Learning Off-By-One Mistakes: An Empirical Study
Sellik, Hendrig; van Paridon, Onno; Gousios, Georgios; Aniche, Maurício
DOI: 10.1109/MSR52588.2021.00019
Publication date: 2021
Document version: Accepted author manuscript
Published in: 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR)
Learning Off-By-One Mistakes: An Empirical Study
Hendrig Sellik
*Delft University of Technology*
Delft, The Netherlands
sellikhendrig@gmail.com
Onno van Paridon
*Adyen N.V.*
Amsterdam, The Netherlands
onno.vanparidon@adyen.com
Georgios Gousios, Mauricio Aniche
*Delft University of Technology*
Delft, The Netherlands
{g.gousios,m.f.aniche}@tudelft.nl
Abstract—Mistakes in binary conditions are a source of error in many software systems. They happen when developers use, e.g., ‘<’ or ‘>’ instead of ‘<=’ or ‘>=’. These boundary mistakes are hard to find and impose manual, labor-intensive work for software developers.
While previous research has proposed solutions to identify errors in boundary conditions, the problem remains open. In this paper, we explore the effectiveness of deep learning models in learning and predicting mistakes in boundary conditions. We train different models on approximately 1.6M examples with faults in different boundary conditions. We achieve a precision of 85% and a recall of 84% on a balanced dataset, but lower numbers on an imbalanced dataset. We also perform tests on 41 real-world boundary condition bugs found on GitHub, where the model shows only modest performance. Finally, we test the model on a large-scale Java code base from *Adyen*, our industrial partner. The model reported 36 buggy methods, but none of them were confirmed by developers.
Index Terms—machine learning for software engineering, deep learning for software engineering, software testing, boundary testing.
I. INTRODUCTION
Off-by-one mistakes happen when developers do not correctly implement a boundary condition in the code. Such mistakes often occur when developers use ‘>’ or ‘<’ in cases where they should have used ‘>=’ or ‘<=’, or vice versa.
Take the example of an off-by-one error in the Gson library1, which we illustrate in Figure 1. The `toFind.length() < limit` condition is wrong. The fix replaces the `<` operator with the `<=` operator. Such mistakes are particularly difficult to find in source code. After all, the result of the program is not always obviously wrong, as it is “merely off by one”. In most cases, the mistake will lead to an “out of bounds” situation, which will then result in an application crash.
A large body of knowledge in the software testing field is dedicated to (manual) boundary testing techniques (e.g., [1, 2, 3, 4, 5]). However, manually inspecting code for off-by-one errors is time-consuming, since determining which binary operator is the correct one is usually heavily context-dependent. The industry has been relying on static analysis tools, such as SpotBugs2 or PVS-Studio3. SpotBugs promises to identify possible infinite loops, as well as array indices, offsets, and lengths that are out of bounds. PVS-Studio also tries to identify mistakes in conditional statements and out-of-bounds indexes in array manipulation. And while these tools can indeed find some off-by-one errors, many go undetected. As we later show in this paper, none of the real-world off-by-one errors could be detected by the state-of-the-practice static analysis tools.
We conjecture that, for a tool to be able to precisely identify mistakes in boundary conditions, it should be able to capture the overall context of the source code under analysis. Understanding the context of the source code has been traditionally a challenge for static analysis techniques. However, recent advances in machine and deep learning have shown that models can learn useful information from the syntactic and semantic information that exist in source code. Tasks that were deemed not possible before, such as method naming [6, 7, 8], type inference [9, 10, 11], and bug finding [12], are now feasible. The lack of reliable tools that detect off-by-one mistakes leaves an excellent opportunity for researchers to experiment with machine learning approaches.
Inspired by the code2vec and code2seq models proposed by Alon et al. [8, 13], we trained several deep learning models on likely correct methods and their counterparts affected by off-by-one mistakes. The models are trained on over 1.6M examples, and the best results are obtained with the Code2Seq [13] model achieving 85% precision and a recall of 84% on a balanced testing set. However, our results also show that the model, when tested on a real-world dataset that consisted of 41 bugs in open-source systems, yields low performance (55% precision and 46% recall).
Finally, we tested the best models at one of our industrial partners. *Adyen* is one of the world’s largest payment service providers, allowing customers from over 150 countries to use over 250 payment methods, including different internet bank transfers and point-of-sale solutions. The company operates in the highly regulated banking industry and, combined with its high processing volumes, there is little to no room for error. Hence, *Adyen* uses industry-standard best practices for early bug detection, such as code reviews, unit testing, and static analysis. It is in *Adyen’s* best interest to look into novel tools that prevent software defects from finding their way into its large code base, preferring methods that scale and do not waste the most expensive resource of the company: the developers’ time. Our results show that, while the model did not reveal any bugs per se, it pointed developers to code that they considered to deviate from their good practices.
1https://github.com/google/gson/commit/161b4ba
2https://spotbugs.github.io
3https://www.viva64.com/en/pvs-studio/

```java
private boolean skipTo(String toFind) throws IOException {
  outer:
  for (; pos + toFind.length() < limit ||
         fillBuffer(toFind.length()); pos++) {
    for (int c = 0; c < toFind.length(); c++) {
      if (buffer[pos + c] != toFind.charAt(c)) {
        continue outer;
      }
    }
  }
  ...
}
```

Fig. 1: An off-by-one error in the Gson library, fixed in commit #161b4ba. The mistake is in the `toFind.length() < limit` condition; the fix replaces the `<` with `<=`.
This paper expands our workshop paper, entitled “OffSide: Learning to Identify Mistakes in Boundary Conditions” [14]. The main contributions of this paper are:
1) An empirical study on the performance of different deep learning models, based on code2vec and code2seq, to detect off-by-one mistakes.
2) A quantitative and qualitative evaluation of deep off-by-one detection models in real-world open-source bugs and in a large-scale industrial system.
II. RELATED WORK
The use of static analysis tools is quite common among software development teams (e.g., [15, 16]). These tools, however, rely on bug pattern detectors that are manually crafted and fine-tuned by static analysis experts. The vast amount of different bug patterns makes it very difficult to cover more than a fraction of them.
Machine Learning for Software Engineering has seen rapid development in recent years inspired by the successful application in the Natural Language Processing field [17]. It is applied in many tasks related to software code such as code translation (e.g., [18]), type inference (e.g., [9, 10]), code refactoring (e.g., [19]) and, as we list below, bug identification.
Pradel et al. [12] use a technique similar to Word2Vec [20] to learn embeddings for JavaScript code tokens extracted from the AST. These embeddings are used to train two-layer feed-forward binary classification models to detect bugs. Each trained model focuses on a single bug type, and the authors test it on problems such as wrong binary operator, wrong operand in binary operation and swapped function arguments. These models do not use all the tokens from the code, but only those specific to the problem at hand. For example, the model that detects swapped function arguments only uses embeddings of the function name and arguments with a few other AST nodes as features.
Allamanis et al. [21] use Gated Graph Neural Network [22] to detect variable misuse bugs on a token level. As an input to the model, the authors use an AST graph of the source code and augment it with additional edges from the control flow graph.
Pascarella et al. [23] show that defective commits are often composed of both defective and non-defective files. They also train a model to predict defective files in a given commit. Habib et al. [24] create an embedding from methods using a one-hot encoding of tokens such as keywords (for, if, etc.), separators (;, (), etc.), identifiers (method, variable names and literals (values such as "abc" and 10). The embeddings for the first 50 tokens are then used to create a binary classification model. The oracle for training data is a state-of-the-art static analysis tool, and the results show that neural bug finding can be highly successful for some patterns, but fail at others.
Li et al. [25] use method AST in combination with a global Program Dependency Graph and Data Flow Graph to determine whether the source code in a given method is buggy or not. The authors use Word2Vec to extract AST node embeddings with a combination of GRU Attention layer and Attention Convolutional Layer to build a representation of the method’s body. Node2Vec [26] is used to create a distributed representation of the data flow graph of the file which the inspected method is in. The results are combined into a method vector which is used to make a softmax prediction.
Wang et al. [27] define bug prediction as a binary classification problem and train three different graph neural networks based on control flow graphs of Java code. They use a novel interval-based propagation mechanism to more efficiently generalize a Graph Neural Network (GNN). The resulting method embedding is fed into a feed-forward neural network to find null-pointer de-reference, array index out of bounds and class cast exceptions. For each bug type, a separate bug detector is trained.
III. APPROACH
In order to detect off-by-one errors in Java code, we train and compare different binary classification machine learning models that classify Java source code methods into one of two output labels: “defective” and “non-defective”. If a method is classified as “defective”, it is suffering from an off-by-one error; otherwise, it is deemed free of such errors.
These models are based on the Code2Vec [8] and Code2Seq [13] models, state-of-the-art deep learning models originally developed for generating method names and descriptions. The models use Abstract Syntax Tree paths of a method as features and create an embedding by combining them with the help of an attention mechanism. In addition, we also build a Random Forest baseline model based on source code tokens.
We acquired the datasets necessary for the training of these models from the work of Alon et al. [8]; combined with our automatically mutated methods, this results in an imbalanced dataset of 920K examples (1-to-10 ratio) and a balanced dataset of 1.6M examples.
We train on both imbalanced and balanced data to see the difference in performance. We then evaluate the accuracy of the model on 41 real-world open-source off-by-one errors. In addition, we further train the models with data from a company project to fine-tune them and find bugs in that project specifically.
Fig. 2: The flow of the research, including data collection, mutation, training, and testing.
TABLE I: The different datasets used in this paper.
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Train</th>
<th>Validation</th>
<th>Test</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>large-balanced</td>
<td>1,593,610</td>
<td>30,634</td>
<td>48,516</td>
<td>1,672,760</td>
</tr>
<tr>
<td>large-imbalanced</td>
<td>876,485</td>
<td>16,849</td>
<td>26,684</td>
<td>920,018</td>
</tr>
<tr>
<td>Adyen</td>
<td>11,032</td>
<td>690</td>
<td>3,148</td>
<td>14,870</td>
</tr>
<tr>
<td>open-source bugs</td>
<td>-</td>
<td>-</td>
<td>82</td>
<td>82</td>
</tr>
</tbody>
</table>
In Figure 2, we show the overall research method. In the following, we provide a more detailed description of the approach and research questions.
A. Datasets
We used the java-large dataset provided by Alon et al. [13] for model training. We used Adyen’s production Java code to further train and test the model with project-specific data. Finally, we used an additional real-world-bugs dataset to evaluate models on real-world bugs. A summary of the datasets can be seen in Table I.
1) The java-large-balanced dataset consists of 9,500 top-starred Java projects from GitHub created since January 2007. Out of those 9,500 projects, 9,000 were randomly selected for the training set, 250 for the validation set, and the remaining 300 were used for the testing set. Originally, this dataset contained about 16M methods, of which 836,380 were candidates for off-by-one errors (i.e., methods with loops or if conditions containing the binary operator <, <=, > or >=). After mutating the methods, the final balanced dataset consisted of 1,672,760 methods, 836,380 of them assumed to be correct and 836,380 assumed to be buggy.
2) The additional imbalanced dataset java-large-imbalanced was constructed to emulate more realistic data, where the majority of the code is not defective. A 10-to-1 ratio between non-defective and defective methods was chosen since it resulted in high precision while retaining reasonable recall. We empirically observed that upon increasing the ratio of non-defective methods even further, the model did not return possibly defective methods when running on Adyen’s codebase; in other words, if the ratio was higher than 10-to-1, the recall of the model became too low for it to be useful.
3) Adyen’s code is a repository containing the production Java code of the company. It consists of over 200,000 methods out of which 7,435 contain a mutation candidate to produce an off-by-one error. After mutating the methods, this resulted in a balanced dataset containing 14,870 data points.
4) 41 real-world bugs in boundary conditions were used for manual evaluation. We extracted the bugs from the 500 most-starred GitHub Java projects. The analyzed projects were not part of the training and evaluation sets and thus were not seen by a model before testing. Using a PyDriller script [28], we extracted a list of candidate commits where authors changed a comparator (e.g., a “>” to “>=”, a “<” to “<=”, etc.). This process returned a list of 1,571 candidate commits, which we analyzed manually until 41 were confirmed to be off-by-one errors and added to the dataset. The manual analysis was then stopped, as it is a very labor-intensive process.
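The candidate-mining step can be sketched as follows. This is a simplified, hypothetical stand-in for the authors' PyDriller script: it flags a removed/added line pair from a commit diff as a candidate when the two lines differ only in their comparison operators.

```python
import re

# Comparator tokens whose change may indicate an off-by-one fix.
# Longest alternatives first, so "<=" is matched before "<".
CMP = re.compile(r'<=|>=|<|>')

def comparator_change(old_line: str, new_line: str) -> bool:
    """Heuristic: the lines differ only in their comparison operators."""
    old_ops, new_ops = CMP.findall(old_line), CMP.findall(new_line)
    same_skeleton = CMP.sub('?', old_line).strip() == CMP.sub('?', new_line).strip()
    return same_skeleton and old_ops != new_ops

# A removed/added line pair from a commit diff (cf. the Gson fix in Fig. 1):
removed = "for (; pos + toFind.length() < limit ||"
added   = "for (; pos + toFind.length() <= limit ||"
print(comparator_change(removed, added))  # True: candidate off-by-one fix
```

Each flagged commit would still require the manual confirmation step described above, since most comparator changes are not off-by-one fixes.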
B. Generating positive and negative instances
In order to train a supervised binary classification model, we require defective examples. To get those, we modified the existing likely correct code to produce likely incorrect code. For each method, we found a list of possible mutation points and selected a random one. After this, we altered the selected binary expressions using JavaParser to generate an off-by-one error.
Due to changing only one of the expressions, the equivalent mutant problem5 does not exist for the training examples, unless the original code was unreachable at the position of the mutation. It is also important to note that the datasets are split on a project level for the java-large dataset and on a sub-module level for Adyen’s code. This means that the positive and the negative examples derived from the same method both end up in the same training, validation, or test set. We did this to avoid evaluating model predictions on code that differs by only one binary operator from code that was used during training.
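A minimal sketch of this mutation step: the authors operate on real ASTs with JavaParser, whereas the hypothetical Python stand-in below illustrates the operator flip on one randomly chosen mutation point.

```python
import random
import re

# Off-by-one mutation: swap a strict comparator with its non-strict
# counterpart (and vice versa).
FLIP = {'<': '<=', '<=': '<', '>': '>=', '>=': '>'}
CMP = re.compile(r'<=|>=|<|>')

def mutate_method(source: str, rng: random.Random) -> str:
    """Pick one comparator occurrence at random and flip it."""
    points = list(CMP.finditer(source))
    if not points:
        return source                      # no mutation candidate
    m = rng.choice(points)
    return source[:m.start()] + FLIP[m.group()] + source[m.end():]

buggy = mutate_method("if (i < n && j >= 0) { ... }", random.Random(0))
```

Because exactly one comparator is flipped, applying the same flip again restores the original method, which mirrors the argument above that the mutant cannot be equivalent to the original (unless the mutated expression is unreachable).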
C. Model Architecture
The models we used in this work are based on the recent Code2Vec model [8] and its enhancement Code2Seq [13], and a baseline model that makes use of random forest. We describe the models in more detail in the next sub-sections.
4JavaParser GitHub page https://github.com/javaparser/javaparser/
5Equivalent mutant problem may exist, for example, if we mutate “dead code”. However, we conjecture that this is a negligible problem and will not affect the results.
1) Code2Vec: The Code2Vec model created by Alon et al. [8] is a Neural Network model used to create embeddings from Java methods. These embeddings were used in the original work to predict method names.
The architecture of this model requires Java methods to be split into path contexts based on the AST of the method. A path context is a random path between two nodes in the AST and consists of two terminal nodes $x_s$, $x_t$ and the path $p_j$ between those terminal nodes, which does not include the terminals. The embeddings for those terminal nodes and paths are learned during training and stored in two separate vocabularies. During training, the three embeddings are concatenated into a single context vector $c_i$ of length $2 \cdot |x_s| + |p_j|$, where the embeddings of $x_s$ and $x_t$ have equal length.
The acquired context vectors $c_i$ for paths are passed through the same fully connected (dense) neural network layer (using the same weights). The network uses hyperbolic tangent activation function and dropout in order to generate a combined context vector $\tilde{c}_i$. The size of the dense layer allows controlling the size of the resulting context vector.
The attention mechanism of the model works by using a global attention vector $a \in \mathbb{R}^n$ which is initialized randomly and learned with the rest of the network. It is used to calculate attention weight $a_i$ for each individual combined context vector $\tilde{c}_i$.
Some methods may not have a large enough AST to generate the required number of context paths. For these, dummy (masked) context paths, which receive an attention weight $a_i$ of zero, are fed to the model. This enables all examples to have the same input shape.
During training, a tag vocabulary $\text{tags\_vocab} \in \mathbb{R}^{|Y| \times l}$ is created, in which each tag (label) $y_i \in Y$ corresponds to an embedding of size $l$. The tag embeddings are learned during training; in the task proposed by the authors, the tags represent method names.
A prediction for a new example is made by computing the normalized dot product between code vector $v$ and each of the tag embeddings $\text{tags\_vocab}$, resulting in a probability for each tag $y_i$. The higher the probability, the more likely the tag belongs to the method.
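Putting the pieces above together, the Code2Vec computation described in this subsection can be summarized as follows (notation as above; $W$ is the dense-layer weight matrix and $a$ the global attention vector, both learned; dropout and bias terms are omitted in this sketch):

$$c_i = [\,x_s \,;\, p_j \,;\, x_t\,], \qquad \tilde{c}_i = \tanh(W c_i), \qquad \alpha_i = \frac{\exp(\tilde{c}_i^{\top} a)}{\sum_{k=1}^{n} \exp(\tilde{c}_k^{\top} a)}$$

$$v = \sum_{i=1}^{n} \alpha_i \tilde{c}_i, \qquad p(y \mid v) = \frac{\exp(v^{\top} \text{tags\_vocab}_{y})}{\sum_{y' \in Y} \exp(v^{\top} \text{tags\_vocab}_{y'})}$$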
2) Code2Seq: The Code2Seq model created by Alon et al. [13] is a sequence-to-sequence model used to create embeddings from Java methods from which method descriptions are learned. The original work was used to generate sequences of natural language words to describe methods.
Similarly to the Code2Vec model, the model works by generating random paths from the AST with a specified maximum length. Each path consists of two terminal tokens $x_s$, $x_t$ and the path $p_j$ between them which, in Code2Seq, includes the terminal AST nodes $p_s, p_t \in p_j$, but not the tokens themselves.
It is important to make a difference between terminal tokens and path nodes. The former are user-defined values, such as a number 4 or variable called stringBuilder while the latter come from a limited set of AST constructs such as NameExpr, BlockStmt, ReturnStmt. There are around 400 different node types that are predefined in the JavaParser implementation\(^6\).
During training, the path nodes and the terminal tokens are encoded differently. Terminal tokens get partitioned into subtokens based on the camelCase notation, which is a standard coding convention in Java. For example, a terminal token stringBuilder will be partitioned into string and Builder. The subtokens are turned into embeddings with a learned matrix $E_{\text{subtokens}}$ and encoding is created for the entire token by adding the values for subtokens.
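To make the sub-token encoding concrete, here is a minimal Python sketch; the toy random vectors stand in for the learned embedding matrix $E_{\text{subtokens}}$, and names and dimensions are illustrative rather than the authors' implementation.

```python
import re

import numpy as np

rng = np.random.default_rng(0)
DIM = 4                      # toy embedding size
E_subtokens = {}             # stands in for the learned matrix E_subtokens

def split_camel_case(token: str) -> list:
    """stringBuilder -> ['string', 'builder'] (lower-cased sub-tokens)."""
    return [s.lower() for s in re.findall(r'[a-z0-9]+|[A-Z][a-z0-9]*', token)]

def encode_token(token: str) -> np.ndarray:
    """Token encoding = sum of its sub-token embeddings, as in Code2Seq."""
    vec = np.zeros(DIM)
    for sub in split_camel_case(token):
        if sub not in E_subtokens:
            E_subtokens[sub] = rng.normal(size=DIM)
        vec += E_subtokens[sub]
    return vec

# 'stringBuilder' and 'arrayBuilder' share the 'builder' sub-token,
# so their encodings share a common component:
a, b = encode_token("stringBuilder"), encode_token("arrayBuilder")
```

This sharing of sub-token embeddings is what keeps the vocabulary small and reduces out-of-vocabulary cases compared to embedding whole tokens.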
Paths of the AST are also split into nodes, and each of the nodes corresponds to a value in a learned embedding matrix $E_{\text{nodes}}$. These embeddings are fed into a bi-directional LSTM whose final states result in a forward-pass output $\overrightarrow{h}$ and a backward-pass output $\overleftarrow{h}$. These are concatenated to produce a path encoding.
As with the Code2Vec model, the encodings of the terminal nodes and the path are concatenated and the resulting encoding is an input to a dense layer with $\tanh$ activation to create a combined context vector $\tilde{c}_i$. Finally, to provide an initial state to the decoder, the representations of all $n$ paths in a given method are averaged.
The decoder uses the initial state $h_0$ to generate an output sequence while attending over all the combined context vectors $\tilde{c}_1, ..., \tilde{c}_n$. The resulting output sequence represents a natural language description of the method. We adapted Code2Seq’s sequence output to be $\{(0|1),<eos>\}$, i.e., a 1 or 0 token indicating the method being buggy or not buggy, and a token that ends the sequence.
The advantage of the Code2Seq model lies in the way the context vectors $\tilde{c}_i$ are created, in particular, in the splitting of terminal tokens into sub-tokens. The sub-token vocabulary yields greater flexibility towards different sub-token combinations. In addition, while Code2Vec embeds entire AST paths between terminals, the Code2Seq model only embeds individual nodes and sub-tokens. This results in fewer out-of-vocabulary examples and a far smaller model, with an order of magnitude fewer parameters than the Code2Vec model.
3) Baseline Model: We developed a baseline model to assess the performance of a simpler architecture. For this, we used a Random Forest model [29] and compared the performance with the same datasets.
First, we tokenized the Java methods using the leaf nodes of their respective ASTs. After this, all the tokens of a method were vectorized using TF-IDF. The vectorized tokens of one method comprised a training example for the Random Forest model, which was then trained on all of the methods from the java-large training set.
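As an illustration of this baseline pipeline, the sketch below wires TF-IDF features into a Random Forest with scikit-learn. The token streams and labels are toy stand-ins, not the actual java-large data; note the custom `token_pattern`, since the default would drop single-character operator tokens such as `<`.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Each "document" is the space-joined leaf tokens of one method's AST
# (toy stand-ins; label 1 = defective, 0 = non-defective).
methods = [
    "for i 0 i < n i ++ sum += a i",    # original
    "for i 0 i <= n i ++ sum += a i",   # mutated
    "while j < limit j ++",
    "while j <= limit j ++",
]
labels = [0, 1, 0, 1]

baseline = make_pipeline(
    TfidfVectorizer(token_pattern=r"\S+"),   # keep operator tokens like "<"
    RandomForestClassifier(n_estimators=100, random_state=0),
)
baseline.fit(methods, labels)
pred = baseline.predict(["for k 0 k < m k ++"])
```

Since TF-IDF discards token order and AST structure, this baseline cannot model the context of a comparator, which is exactly what the path-based models above are designed to capture.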
D. Hyper-parameter optimization and model training
For hyper-parameter optimization, we used Bayesian optimization [30]. We selected model precision as the optimization objective, since high precision is required to obtain a usable defect prediction model. We used Bayesian optimization over other methods like random search or grid search because it builds a surrogate function that guides the search of the hyper-parameter space based on previous results, acting as intuition for parameter selection. This saves significant time, because the actual model does not need to be trained as often: unpromising parameter ranges are discarded early in the process.
The hyper-parameters were optimized on the java-med-balanced dataset, another dataset made public by Alon et al. [8]. The dataset consists of 1,000 top-starred Java projects from GitHub. Out of those 1,000 projects, 800 were randomly selected for the training set, 100 for the validation set, and the remaining 100 were used for the testing set. Originally, this dataset contained about 4M methods, of which 170,295 were candidates for off-by-one errors; after mutation, 170,295 methods were assumed to be correct and 170,295 assumed to be buggy.
We ran optimization for four different scenarios. Two runs for the balanced java-medium dataset with Code2Vec model and Code2Seq models, respectively, and an additional two runs with the same models for imbalanced datasets. We used a machine with Intel(R) Xeon(R) CPU E5-2660 v3 processor running at 2.60GHz with a Tesla M60 graphics card.
Once the hyper-parameters were identified, we train the Code2Vec and Code2Seq models (as well as the baseline) using the balanced and imbalanced versions of the java-large dataset, and perform further training with the source code of our industrial partner. We show the training time of the final models in Table II.
E. Analysis
We report the precision and recall of our models. Precision evaluates the models’ ability to avoid classifying negative examples as positive; a negative example classified as positive is known as a false positive. This means that a model with high precision has a low false-positive rate, and a model with low precision has a high false-positive rate. More formally, precision is the number of true positive (TP) predictions divided by the sum of true positive and false positive (FP) predictions.
For a bug detection model, low precision means a high number of false positives, making the developers spend their time checking a large number of errors reported by the model only to find very few predictions that are defective. This means that in this work, we prefer high precision for a bug-detection model.
Monitoring precision alone is not enough, since a model that is precise but finds only a few bugs out of thousands is also not useful. Hence, recall is also measured. It captures the models’ ability to find all the defective examples in the dataset: recall is low when the model misses many of the positive examples and high when it finds most of them. More formally, it is the number of true positive predictions divided by the sum of true positive and false negative (FN) predictions.
Ideally, a bug prediction model would find all of the bugs in the dataset and have a high recall score. However, deep learning models usually do not achieve perfect precision and recall at the same time; for harder problems, a probabilistic model exhibits a trade-off: when increasing the confidence threshold for classifying an example as positive, precision tends to rise while recall declines. For this reason, the scikit-learn package was used to plot a precision-recall curve, which shows the effect of changing the confidence threshold needed to classify an example as positive (defective).
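The trade-off can be illustrated with a small self-contained sketch (toy model confidences and labels, not the paper's data):

```python
def precision_recall(scores, labels, threshold):
    """Precision/recall when predicting 'defective' for scores >= threshold."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy model confidences and true labels (1 = defective):
scores = [0.95, 0.9, 0.8, 0.6, 0.4, 0.2]
labels = [1,    1,   0,   1,   0,   0]
# Raising the threshold trades recall for precision:
curve = [(t, *precision_recall(scores, labels, t)) for t in (0.5, 0.7, 0.85)]
```

A precision-recall curve is simply this computation swept over every distinct confidence value.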
F. Reproducibility
We provide all the source code (data collection, preprocessing, and machine learning models) in our online appendix [31]. The source code is also available in GitHub\(^7\).
IV. METHODOLOGY
The goal of this study is to measure the effectiveness of deep learning models in identifying off-by-one mistakes. To that aim, we propose three research questions:
- **RQ1:** How do the models perform on a controlled dataset? In order to obtain a vast quantity of data, we use a controlled dataset (see Section III-A). We train the models on the dataset and use metrics such as precision and recall to assess the performance.
- **RQ2:** How well do the methods generalize to a dataset made of real-world bugs? We mine a dataset of real-world off-by-one error bugs from GitHub issues of various open-source projects. Then we use a model to predict the error-proneness of a method before and after a fix. This will indicate how well the model works for real-world data. This evaluation will enable us to extract the precision metric and compare it to the one from RQ1.
- **RQ3:** Can the approach be used to find bugs in a large-scale industry project? One useful application of an error-detection model is to analyze an existing project and report methods containing off-by-one errors. We make several runs where the model is first trained on a dataset with mutated code and then tested on real code to find such errors. In addition, we further train the model with a different version of the industry project to find errors in future versions of the project.
\(^7\)https://github.com/hsellik/thesis/tree/MSR-2021
To answer $RQ_1$, we performed hyper-parameter optimization. After this, we selected the best hyper-parameter values and trained the models with randomly initialized parameters on the java-large dataset, on the same machine as used for hyper-parameter optimization (see Section III-D). We trained the Code2Seq and Code2Vec models until there was no gain in precision for three epochs on the validation set. After this, we assessed the models on the testing set of the java-large dataset.
The process was conducted for three different configurations of data. These were:
1) BB - the training data was balanced (B) with the cross-validation and testing data also being balanced (B).
2) BI - the training data was balanced (B) with the cross-validation and testing data being imbalanced (I).
3) II - the training data was imbalanced (I) with the cross-validation and the testing data also being imbalanced (I).
The data imbalance was inspired by the work of Habib et al. [24], who reported that a bug detection model trained on a balanced dataset would have poor performance when testing on a more real-life scenario with imbalanced classes.
To answer $RQ_2$, we selected the best-performing model on the controlled java-large testing set (see Table III), which was the model based on the Code2Seq architecture. After this, the model was tested on the bugs and their fixes found from several real-world Java projects (open-source bugs dataset in Table I).
Firstly, we tested the model on the correct code that was obtained from the GitHub diff after the change to see the classification performance on non-defective code. To test the model performance on defective code, we reverted the example to the state where the bug was present using the git version control system. After this, we recorded the model prediction on the defective method.
In addition, as a way to compare our work with static analysis, we apply three popular static analyzers to the same set of defective and non-defective snippets: SpotBugs (v.4.0.0-beta1), PVS-Studio (v.7.04.34029), and the static analyzer integrated with IntelliJ IDEA (v. 2019.2.3).
To answer $RQ_3$, we trained the Code2Seq model only on the data generated from the company project, but the training did not start with randomly initialized weights. Instead, the process was started with the weights acquired after training on the java-large dataset (see Figure 2).
We selected the Code2Seq based model because it had the best performance on the imbalanced testing set of the controlled java-large set. We selected the performance on the imbalanced controlled set as a criterion since we assumed that the company project also contains more non-defective examples than defective ones.
We used the pre-trained model because the company project alone did not contain enough data for the training process. Additionally, due to the architecture of the Code2Seq and Code2Vec models, the embeddings of terminal and AST node vocabularies did not receive additional updates during further training with company data. We trained the model until there was no gain in precision for three epochs on the validation set, and after this, we tested the model on the test set consisting of controlled Adyen data.
We conducted an additional check on Adyen data by trying to find bugs in the most recent version of the project. More specifically, we updated the project to its most recent version using their git version control system and, without any modifications to their original code, used the model to predict whether every Java method in their code base had an off-by-one mistake. We analyzed all bug predictions over a threshold of 0.8 to see if they contained bugs. The 0.8 threshold was defined after manual experimentation: we aimed at a set of methods that was large enough to bring us interesting conclusions, yet small enough to enable us to manually verify each of them.
A. Threats to Validity
In this section, we discuss the threats to the validity of this study and the actions we took to mitigate them.
1) Internal validity: Our method performs mutations to generate faulty examples from likely correct code by editing one of the binary conditions within the method. This means that while the correct examples represent a diverse set of methods from open-source projects, the likely incorrect methods may not represent a realistic distribution of real-world bugs. This affects the model that is being trained with those examples and also the testing results conducted on this data.
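For illustration, this mutation step can be sketched as flipping a single boundary operator in the method text. The class below is a simplified, regex-based stand-in of our own (not the actual AST-based pipeline); all names are ours:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified, regex-based stand-in for an AST-based mutation pipeline:
// each mutant flips exactly one boundary operator in the method text.
public class OffByOneMutator {
    // Two-character operators must come first so "<=" is not matched as "<".
    private static final Pattern OP = Pattern.compile("<=|>=|<|>");

    static String flip(String op) {
        switch (op) {
            case "<":  return "<=";
            case "<=": return "<";
            case ">":  return ">=";
            case ">=": return ">";
            default:   throw new IllegalArgumentException(op);
        }
    }

    // One likely-faulty variant per boundary operator occurrence.
    static List<String> mutants(String method) {
        List<String> out = new ArrayList<>();
        Matcher m = OP.matcher(method);
        while (m.find()) {
            out.add(method.substring(0, m.start())
                    + flip(m.group())
                    + method.substring(m.end()));
        }
        return out;
    }

    public static void main(String[] args) {
        for (String s : mutants("for (int i = 0; i <= n; i++) sum += a[i];")) {
            System.out.println(s); // prints the variant with "i < n"
        }
    }
}
```

Each match yields one likely-faulty variant, e.g., `i <= n` becomes `i < n`; a real implementation operates on the AST so that operators inside strings or generic type arguments are not touched.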
2) External validity: While the work included a diverse set of open-source projects, the only closed-source project that was used during this study was Adyen’s. Hence, the closed-source projects (in training and in validation) are under-represented in this study.
Moreover, we have only experimented with Java code snippets. While the approach seems to be generic enough to be applicable to any programming language, the results might vary given the particular way that developers write code in different communities. Therefore, more experimentation needs to be conducted before we can argue that the results generalize to any programming language.
V. Results
In the following sections, we present the results of our research questions.
A. $RQ_1$: How do the models perform on a controlled dataset?
In Table III, we show the precision and recall of the different models. In Figures 3a and 3b, we show the ROC curve and the precision-recall curve of the experiment with Code2Seq based model for the imbalanced java-large dataset.
**Observation 1**: Models present high precision and recall when trained and tested with balanced data. The results show that when training models on a balanced dataset with an equal amount of defective/non-defective code and then testing them on a balanced testing set, both the Code2Vec and Code2Seq based models achieve high precision and recall, with the Code2Seq based model reaching better precision (85.23% vs 80.11%) and recall (84.82% vs 77.01%) than the Code2Vec based model. In addition, the balanced models' performance is comparable to that of Offside [14], our previous work exploring only the Code2Vec model, which was tested on the identical java-large dataset using a very similar preprocessing pipeline and training setup (80.11% vs 80.9% precision and 77.01% vs 75.6% recall).
**Observation 2**: The metrics drop considerably when tested on an imbalanced dataset. When simulating a more realistic scenario by creating an imbalance in the testing set with more non-defective methods, the recall of the models remained similar, increasing from 84.82% to 84.86% for the Code2Seq model and dropping from 77.01% to 75.53% for the Code2Vec model. However, the precision of the models dropped drastically, with the Code2Seq model going from 85.23% to 36.08% and the Code2Vec model from 80.11% to 28.52%. The baseline model also drops in precision, from 50% to 8.99%, while keeping the same recall.
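The size of this precision drop is what the definition of precision predicts: holding the per-example true-positive and false-positive rates fixed, multiplying the number of negatives by 10 multiplies the false positives by 10. A back-of-the-envelope sketch (our own illustration, reusing the balanced Code2Seq scores from Table III; the class and method names are ours):

```java
// Back-of-the-envelope model of precision under class imbalance, assuming the
// per-example true-positive and false-positive rates stay fixed (our
// illustration; the numbers below are the balanced Code2Seq scores).
public class ImbalancePrecision {
    // Expected precision after multiplying the number of negatives by `ratio`.
    static double precisionAt(double balancedPrecision, double recall, double ratio) {
        double tp = recall;                                           // per positive example
        double fp = tp * (1 - balancedPrecision) / balancedPrecision; // at a 1:1 ratio
        return tp / (tp + ratio * fp);
    }

    public static void main(String[] args) {
        double p = precisionAt(0.8523, 0.8482, 10.0);
        System.out.printf("predicted precision at 10:1 imbalance: %.1f%%%n", 100 * p);
    }
}
```

With the balanced scores of 85.23% precision and 84.82% recall, the sketch predicts roughly 36.6% precision at a 10:1 ratio, in line with the observed 36.08%.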
**Observation 3**: The low precision can be mitigated by training on an imbalanced dataset, but at the cost of recall. We trained the Code2Seq and Code2Vec models on an imbalanced dataset, and the results show that the precision on imbalanced test data returned almost to the same level for the Code2Seq-based model (83.04% vs 85.23%), but remained lower for the Code2Vec-based model (64.65% vs 80.11%). However, recall declined drastically, from 84.82% to 42.34% for the Code2Seq model and from 77.01% to 41.00% for the Code2Vec model.
When analysing the curves in Figures 3a and 3b, the precision is ≈0.8 while recall remains ≈0.5 at a confidence threshold of 0.8. Moreover, precision correlates with model confidence: higher thresholds yield better precision but lower recall.
**RQ1** summary: Both Code2Seq and Code2Vec based models present high accuracy on a balanced dataset. The numbers drop when we make use of imbalanced (i.e., more similar to the real-world) datasets.
B. **RQ2**: How well do the methods generalize to a dataset made of real-world bugs?
The performance of the model on the 41 real-world boundary mistakes and their non-defective counterparts is presented in Table IV.
**Observation 4**: The model can detect real-world bugs, but with a high false-positive rate. Out of the 41 defective methods, 19 (46.34%) were classified correctly, and out of the 41 correct methods, 26 (63.41%) were classified correctly. Precision and recall scores of 55.88% and 46.34% were achieved while evaluating the Code2Seq model trained on balanced data on the real-world bugs, using a threshold of 0.5. Compared to the results from the java-large testing set with augmented methods, these results are significantly lower, with precision and recall being 29.35 and 38.08 percentage points lower, respectively (see the metrics for the Code2Seq model with Experiment BB in Table III).
**Observation 5**: The state-of-the-practice linting tools did not find any of the real-world bugs. As an interesting remark, none of the bugs were identified by any of the state-of-the-practice linting tools we experimented with. This reinforces the need for approaches that identify such bugs (by means of static analysis or deep learning).
**RQ2** summary: The model presents only reasonable performance on real-world off-by-one mistakes in open-source projects. Static analysis tools did not detect any bug.
C. **RQ3**: Can the approach be used to find bugs from a large-scale industry project?
We present the accuracy of the model on the code base of our industrial partner, Adyen, also in Table III.
**Observation 6**: Models trained on open-source data show satisfactory results in the industry dataset. Our empirical findings show that when a model is trained on an open-source dataset and then applied to the company project (following the same pipeline of mutating methods to generate positive and negative instances), it achieves precision and recall scores of 71.15% and 24.66% for the Code2Seq model and somewhat lower 53.85% and 20.46% for the Code2Vec model, respectively.
**Observation 7**: Further training on the Adyen project did not yield better results. We hypothesized that training the model further on Adyen’s code base would give a boost in precision and recall scores. The recall of the models improved by 6.0 percentage points for the Code2Seq based model and 2.93 for the Code2Vec based model. However, the precision of both models dropped, by 4.49 percentage points for Code2Seq and 9.9 for Code2Vec.
**Observation 8**: The model did not reveal any bugs, but 20% of the reported methods were considered suspicious by the developers. Running the model on a newer version of the repository reported 36 potential bugs with a confidence threshold over 0.8 (which we chose after experimenting with different thresholds and analyzing the number of suspicious methods the model returned that we considered feasible to manually investigate). While no bugs were found after manually analyzing all the reported snippets, we marked seven methods as suspicious. When we showed these methods to the developers, they agreed that, while not containing a bug per se, the seven methods deviate from good coding standards and should be refactored. More specifically, four methods had the for loop being initialized at a wrong index (i.e., the for loop was initialized with $i = 1$, but inside the body, the code performed several $i - 1$) and three snippets had hard-coded unusual constraints in the binary expression (i.e., $a > 256$, where 256 is a specific business constraint). Interestingly, Pradel and Sen [12] also observed that models can sometimes point to pieces of code that are not buggy, but highly deviated from coding standards.
**Observation 9**: The model can potentially be useful at commit time; however, the number of false alarms is to be considered. Fixing mistakes regarding good code practices for old pieces of software might not be considered worthwhile at large companies, given the possible unwanted changes to the behavior of the software. However, if such a system were to be employed during automated testing, the alerts might help developers to adhere to better practices. We observed the model pointing to relevant problems in 7 out of the 36 potential bugs (20% of the methods it identifies). While 20% might be considered a low number, one might argue that inspecting 36 methods out of a code base that contains thousands of methods is not a costly operation and might be worth the effort. However, we still do not know the number of false negatives that the tool might give, as inspecting all the methods of the code base is unfeasible.
**RQ3** summary: When tested on a large-scale industrial software system, the approach did not reveal any bugs per se, but pointed to code considered to deviate from good practices.
VI. Future Work
We see much room for improvement before these models can reliably identify off-by-one errors. In the following, we list the ones we believe to be most urgent:
The need for more data points for the off-by-one problem. In this paper, we leveraged the existing java-large dataset created by Alon et al. [13]. While the entire dataset was built on top of 9,500 GitHub projects and contained approximately 16M methods, only around 836k had binary conditions (e.g., methods with loops and ifs containing a $<$, $<=$, $>$ or $>=$). We augmented this dataset to 1.6M methods by introducing the defective samples. Nevertheless, there is a big difference between 16M and 1.6M methods for training. As Alon et al. [8] argue: “a simpler model with more data might perform better than a complex model with little data”. It should be part of any future work to devise a much larger dataset for the off-by-one problem and to try the models we experiment with here before proposing more complex models.
Moreover, our dataset contains fewer usages of $>=$ or $<=$ compared to usages of $>$ or $<$, clearly reflecting the preferences of developers when coding such boundaries. These differences can lead to biased training and, as a result, we observed models tending to give false positive results in the case of $>=$ or $<=$. One way to mitigate the issue is to create a balanced dataset with a more equal distribution of binary operators, as well as of the places of their occurrence (if-conditions, for- and while-loops, ternary expressions, etc.).
The challenges of imbalanced data. In this study, we explored the effects of class balance on the effectiveness of the model. However, the real imbalance of the problem in real life (i.e., the proportion between methods with off-by-one mistakes and methods without them) is unknown, although we strongly believe the distribution to be imbalanced. Nevertheless, a 10:1 proportion gives us an initial understanding of how models would handle such high imbalance. Our results show that imbalance indeed negatively affects the performance of the model. Therefore, we suggest that researchers focus their attention on how to make these models better in the face of imbalanced datasets.
The support for inter-procedural analysis. Currently, our approach only supports the analysis of the AST of a single method. However, the behaviour of a method, and the possible bugs therein, also depend on the contents of other methods. For example, in recent research by Compton et al. [32], the embedding vectors from the Code2Vec model are concatenated to form an embedding for the entire class. Future work should explore whether class embeddings would perform better.
Experimenting with different (and more recent) architectures. In our work, we mainly looked at Code2Vec and Code2Seq models. We now see more recent models, such as the GREAT model proposed by Hellendoorn et al. [33], which uses transformers and also captures the data-flow of the code. We believe that data-flow information would enhance the performance of our models.
Making use of Byte-Pair Encoding (BPE) techniques. NLP models are often dependent on the vocabulary they are trained on. The out-of-vocabulary (OoV) problem also happens in this work. When testing the models trained on top of open-source data at Adyen, we had to replace unknown tokens with a generic UNK token. We conjecture that this may diminish the effectiveness of the models. Unfortunately, we did not measure how many times the UNK token was used in our experiments; we plan to measure it more precisely in future replications of this work. In future work, we also plan to make use of techniques such as Byte-Pair Encoding (BPE) [34, 35], which attempts to mitigate the impact of out-of-vocabulary tokens. We note that the use of BPE is becoming more and more common in software engineering models (e.g., [10, 33, 36]).
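The core of BPE is simple: repeatedly merge the most frequent adjacent symbol pair into a new vocabulary symbol. A minimal sketch of one merge step (our own illustration, not the tokenizer of any specific library; class and method names are ours):

```java
import java.util.*;

// One merge step of byte-pair encoding over a list of token sequences.
public class BpeStep {
    // Count adjacent symbol pairs across all token sequences.
    static Map<String, Integer> pairCounts(List<List<String>> words) {
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> w : words)
            for (int i = 0; i + 1 < w.size(); i++)
                counts.merge(w.get(i) + " " + w.get(i + 1), 1, Integer::sum);
        return counts;
    }

    // Merge every occurrence of the most frequent adjacent pair into one symbol.
    static String mergeBest(List<List<String>> words) {
        String best = Collections.max(pairCounts(words).entrySet(),
                Map.Entry.comparingByValue()).getKey();
        String[] p = best.split(" ");
        for (List<String> w : words)
            for (int i = 0; i + 1 < w.size(); i++)
                if (w.get(i).equals(p[0]) && w.get(i + 1).equals(p[1])) {
                    w.set(i, p[0] + p[1]);
                    w.remove(i + 1);
                }
        return p[0] + p[1];
    }

    public static void main(String[] args) {
        // Split two identifiers into characters, as a BPE tokenizer would start.
        List<List<String>> words = new ArrayList<>();
        words.add(new ArrayList<>(List.of("i", "n", "d", "e", "x")));
        words.add(new ArrayList<>(List.of("i", "n", "d", "e", "x", "O", "f")));
        System.out.println("merged symbol: " + mergeBest(words));
        System.out.println(words);
    }
}
```

Applying `mergeBest` repeatedly can yield subword units such as `index`, so an unseen identifier like `indexing` could still be tokenized into known pieces instead of collapsing to UNK.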
A deeper understanding of the differences between our model and Pradel's and Sen's [12] model. The DeepBugs paper explores the effectiveness of deep learning models on a similar problem, which the authors call “Wrong Binary Operator”. The overall idea of their approach is similar to ours (in other words, their work also served as inspiration for this one): the negative instances (i.e., the buggy code) are generated through mutations of the positive code (i.e., non-buggy code), the code representation is a vector based on the embeddings of all the identifiers in the code, and the classification task is a feed-forward neural network that learns from the balanced set of positive and negative instances. Their results show an accuracy of 89%-92% in the controlled dataset (i.e., slightly higher than our results in RQ1), and a precision of 68% in the manual analysis (i.e., higher than our results in RQ2). Interestingly, the authors also observe that the model sometimes reports non-buggy code which deviates from best practices (i.e., similar to our observations in RQ3). When designing this study, we did not explicitly compare the results to DeepBugs: the embeddings derived from Code2Vec/Code2Seq capture more information, and we conjectured that they would naturally supersede DeepBugs. We nevertheless see a few differences between both works. First, in their “Wrong Binary Operator” task, the mutation replaces the (correct) binary operator with any other binary operator, e.g., an $i < length$ can become an $i \% length$. In our case, we limit ourselves to off-by-one mistakes, i.e., a correct $i < length$ will always become $i <= length$. We conjecture that this may increase the difficulty for the model to learn, as the bugs are slightly more subtle.
Second, while the manual analysis conducted in the DeepBugs paper is performed on the testing set (which contains artificial bugs), our RQ2 explores the performance of the model in real-world bugs, i.e., bugs that were found and fixed by developers. This extra reality we bring to the experiment may be the reason for the lower performance. Finally, we assumed that more robust models such as code2vec and code2seq would better capture the intricacies of the off-by-one mistake. The model used in DeepBugs is simpler and yet as accurate as ours. More work is needed to understand the pros and cons of our model and how both works can be combined for the development of better and more accurate models.
VII. Conclusions
Software development practices offer many techniques for detecting bugs at an early stage. However, these methods come with their challenges and are either too labor-intensive or leave a lot of room for improvement. In this paper, we adapted recent state-of-the-art deep learning models to detect off-by-one errors in Java code, which are traditionally hard for static analysis tools due to their high dependency on context.
We concluded that the trained models, while effective in controlled datasets, still do not work well in real-world situations. We see the use of deep learning models to identify off-by-one errors as promising. Nevertheless, there is still much room for improvement, and we hope that this paper helps researchers in paving the road for future studies in this direction.
Acknowledgments
We thank Jón Arnar Briem, Jordi Smit, and Pavel Rapoport for their participation in the workshop version of this paper.
REFERENCES
Symbolic Range Analysis of Pointers
Vitor Paisante, Maroua Maalej, Leonardo Barbosa, Laure Gonnord, Fernando Magno Quintão Pereira
To cite this version:
HAL Id: hal-01228928
https://inria.hal.science/hal-01228928
Submitted on 25 Mar 2016
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Distributed under a Creative Commons Attribution - NonCommercial 4.0 International License
Symbolic Range Analysis of Pointers
Vitor Paisante
Department of Computer Science
UFMG, Brazil
paisante@dcc.ufmg.br
Maroua Maalej
University of Lyon, France & LIP
(UMR CNRS/ENS Lyon/
UCB Lyon1/INRIA)
F-69000 Lyon, France
Maroua.Maalej@ens-lyon.fr
Leonardo Barbosa
Department of Computer Science
UFMG, Brazil
leob@dcc.ufmg.br
Laure Gonnord
Univ. Lyon1, France & LIP
(UMR CNRS/ENS Lyon/
UCB Lyon1/INRIA)
F-69000 Lyon, France
Laure.Gonnord@ens-lyon.fr
Fernando Magno Quintao Pereira
Department of Computer Science
UFMG, Brazil
fernando@dcc.ufmg.br
Abstract
Alias analysis is one of the most fundamental techniques that compilers use to optimize languages with pointers. However, in spite of all the attention that this topic has received, the current state-of-the-art approaches inside compilers still face challenges regarding precision and speed. In particular, pointer arithmetic, a key feature in C and C++, is yet to be handled satisfactorily. This paper presents a new alias analysis algorithm to solve this problem. The key insight of our approach is to combine alias analysis with symbolic range analysis. This combination lets us disambiguate fields within arrays and structs, effectively achieving more precision than traditional algorithms. To validate our technique, we have implemented it on top of the LLVM compiler. Tests on a vast suite of benchmarks show that we can disambiguate several kinds of C idioms that current state-of-the-art analyses cannot deal with. In particular, we can disambiguate 1.35x more queries than the alias analysis currently available in LLVM. Furthermore, our analysis is very fast: we can go over one million assembly instructions in 10 seconds.
Categories and Subject Descriptors D - Software [D.3 Programming Languages]: D.3.4 Processors - Compilers
General Terms Languages, Experimentation
Keywords Alias analysis, range analysis, speed, precision
1. Introduction
Pointer analysis is one of the most fundamental compiler technologies. This analysis lets the compiler distinguish one memory location from others; hence, it provides the necessary information to transform code that manipulates memory. Given this importance, it comes as no surprise that pointer analysis has been one of the most researched topics within the field of compiler construction [12]. This research has contributed to make the present algorithms more precise [10, 28], and faster [11, 22]. Nevertheless, one particular feature of imperative programming languages remains to be handled satisfactorily by the current state-of-the-art approaches: the disambiguation of pointer intervals.
Mainstream compilers still struggle to distinguish intervals within the same array. In other words, state-of-the-art pointer analyses often fail to disambiguate regions addressed from a common base pointer via different offsets, as explained by Yong and Horwitz [27]. Field-sensitive pointer analyses provide a partial solution to this problem. These analyses can distinguish different fields within a record, such as a struct in C [17], or a class in Java [26]. However, they rely on syntax that is usually absent in the low level program representations adopted by compilers. Shape analyses [13, 21] can disambiguate subparts of data-structures such as arrays, yet their scalability remains an issue to be solved. Consequently, many compiler optimizations, such as
loop transformations, tiling, fission, skewing and interchanging [25, Ch. 9], are very limited in practice. Therefore, we claim that, to reach their full potential, compilers need to be provided with more effective alias analyses.
This paper describes such an analysis. We introduce an abstract domain that associates pointers with symbolic ranges. In other words, for each pointer \( p \) we conservatively estimate the range of memory slots that can be addressed as an offset of \( p \). We let \( GR(p) \) be the global abstract address set associated with pointer \( p \), such that if \( \text{loc}_i + [l, u] \in GR(p) \), then \( p \) may dereference any address from \( @(\text{loc}_i) + l \) to \( @(\text{loc}_i) + u \), where \( \text{loc}_i \) is a program site that contains a memory allocation call, and \( @(\text{loc}_i) \) is the actual return address of the \texttt{malloc} at runtime. We let \( l \) and \( u \) be two symbols defined within the program code. Like the vast majority of pointer analyses available in the compiler literature, from Andersen’s work [2] to the more recent technique of Zhang et al. [28], our method is correct if the underlying program is also correct. In other words, our results are sound with respect to the semantics of the program if this program has no undefined behavior, such as out-of-bounds accesses.
The key insight of this paper is the combination of pointer analysis with range analysis on the symbolic interval lattice. In a symbolic range analysis, ranges are defined as expressions of the program symbols, a symbol being either a constant or the name of a variable. There exist many approaches to symbolic range analyses in the literature [4, 15, 19]. The algorithms that we present in this paper do not depend on any particular implementation. Nevertheless, the more precise the range analysis that we use, the more precise the analysis facts that we produce. In this work we have adopted the symbolic range analysis proposed in 1994 by William Blume and Rudolf Eigenmann [4].
To validate our ideas, we have implemented them in the LLVM compilation infrastructure [14]. We have tested our pointer analysis onto three different benchmarks used in previous work related to pointer disambiguation: Prolangs [20], PtrDist [29] and MallocBench [9]. As we show in Section 4, our analysis is linear on the size of programs. It can go over one million assembly instructions in approximately 10 seconds. Furthermore, we can disambiguate 1.35x more queries than the alias analysis currently available in LLVM.
2. Overview
We have two different ways to answer the following question: “do pointers \texttt{tmp}_1 and \texttt{tmp}_2 alias?” These tests are called global and local. In this section, we will use two different examples to illustrate situations in which each query is more effective. These distinct strategies are complementary: one is not a superset of the other.
Global pointer disambiguation. Figure 1 illustrates our first approach to disambiguate pointers. The figure shows a pattern typically found in distributed systems implemented in C. Messages are represented as arrays of bytes. In this particular example, messages have two parts: an identifier, which is stored in the beginning of the array, and a payload, which is stored right after. The loops in lines 5-8 and 9-12 fill up each of these parts with data. If a compiler can prove that the stores at lines 6 and 10 are always independent, then it can perform optimizations that would not be possible otherwise. For instance, it can parallelize the loops, or switch them, or merge them into a single body.
No alias analysis currently available in either gcc or LLVM is able to disambiguate the stores at lines 6 and 10. These analyses are limited because they do not contain range information. The range interval \([l, u]\) associated with a variable \( i \) is an estimate of the lowest \((l)\) and highest \((u)\) values that \( i \) can assume throughout the execution of the program. In this paper, we propose an alias analysis that solves this problem. To achieve this goal, we couple this alias analysis with range analysis on symbolic intervals [4]. Thus, we will say that the store at line 6 might modify any address from \( p + 0 \) to \( p + N - 1 \), and that the store at line 10 might write on any address from \( p + N \) to \( p + N + \texttt{strlen}(m) - 1 \). For this purpose, we will use an abstract address that encodes the actual value(s) of \( p \) inside the \texttt{prepare} function. These memory addresses are depicted in Figure 2, where each \( \square \) represents a memory slot.
Whole program analysis reveals that there are two candidate locations that any pointer in the program may refer to. These locations have been created at lines 17 and 18 of Figure 1, and we represent them abstractly as \texttt{loc}_{17} and \texttt{loc}_{18}. These names are unique across the entire program.
 1  #include <stdlib.h>
 2
 3  void prepare(char* p, int N, char* m) {
 4    char *i, *e, *f;
 5    for (i = p, e = p + N; i < e; i += 2) {
 6      *i = 0;
 7      *(i + 1) = 0xFF;
 8    }
 9    for (f = e + strlen(m); i < f; i++) {
10      *i = *m;
11      m++;
12    }
13  }
14
15  int main(int argc, char** argv) {
16    int Z = atoi(argv[1]);
17    char* b = (char*) malloc(Z);
18    char* s = (char*) malloc(strlen(argv[2]));
19    strcpy(s, argv[2]);
20    prepare(b, Z, s);
21    ...
22    return 0;
23  }
Figure 2. Array $p$ in the routine prepare seen in Figure 1. Lines 6 and 10 represent the different stores in the figure.
void accelerate(float* p, float X, float Y, int N) {
  int i = 0;
  while (i < N) {
    p[i] += X;      // float* tmp_0 = p + i;  *tmp_0 = ...;
    p[i + 1] += Y;  // float* tmp_1 = p + i + 1;
    i += 2;         // *tmp_1 = ...;
  }
}
Figure 3. Program that shows the need to assign common names to addresses that spring from the same base pointer.
After running our analysis, we find out that the abstract state (GR) of $i$ at line 6 is $GR(i_{l.6}) = \{loc_{17} + [0, N - 1]\}$, and that the abstract state of $i$ at line 10 is $GR(i_{l.10}) = \{loc_{17} + [N, N + strlen(m) - 1]\}$. Given that these two abstract ranges do not intersect, we know that the two stores always update different locations. We call this check the global disambiguation criterion.
Local pointer disambiguation. Figure 3 shows a program in which the simple intersection of ranges would not let us disambiguate pointers $tmp_0$ and $tmp_1$. After solving global range analysis for that program, we have that $GR(tmp_0) = \{loc_0 + [0, N + 1]\}$ and that $GR(tmp_1) = \{loc_0 + [1, N + 2]\}$, where $loc_0$ defines the abstract address of the function parameter $p$. The intersection of these ranges is non-empty for $N \geq 1$. Thus, the global check that we used to disambiguate locations in Figure 1 does not work in Figure 3. Notwithstanding this fact, we know that $tmp_0$ and $tmp_1$ will never point to a common location. In fact, these pointers constitute different offsets from the same base address. To deal with this imprecision of the global check, we will also discuss a local disambiguation criterion.
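The global disambiguation criterion can be made concrete with ordinary integer intervals in place of symbolic ones. The toy model below is our own illustration (the paper's implementation works on symbolic ranges inside LLVM); all names are ours:

```java
import java.util.List;

// Toy model of the global disambiguation criterion with concrete intervals.
public class RangeAlias {
    // An abstract address: allocation site `loc` plus an offset interval [lo, hi].
    static final class Addr {
        final int loc; final long lo; final long hi;
        Addr(int loc, long lo, long hi) { this.loc = loc; this.lo = lo; this.hi = hi; }
    }

    // Global criterion: two pointers may alias only if they share an allocation
    // site whose offset intervals overlap.
    static boolean mayAliasGlobal(List<Addr> gr1, List<Addr> gr2) {
        for (Addr a : gr1)
            for (Addr b : gr2)
                if (a.loc == b.loc && a.lo <= b.hi && b.lo <= a.hi)
                    return true;
        return false;
    }

    public static void main(String[] args) {
        // Figure 1 with N = 4 and strlen(m) = 3: the store at line 6 writes
        // loc17 + [0, 3]; the store at line 10 writes loc17 + [4, 6].
        List<Addr> store6  = List.of(new Addr(17, 0, 3));
        List<Addr> store10 = List.of(new Addr(17, 4, 6));
        System.out.println(mayAliasGlobal(store6, store10)); // prints false
    }
}
```

The two abstract ranges share an allocation site but their offset intervals are disjoint, so the check reports no alias; note that, as the text explains, this global check alone cannot separate `tmp_0` and `tmp_1` in Figure 3, which is what the local criterion addresses.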
3. Combining Range and Pointer Analyses
We perform our pointer analysis in several steps. Figure 5 shows how these phases relate to each other. Our final product is a function that, given two pointers, $p_0$ and $p_1$, tells whether they may point to overlapping areas or not. An invocation of this function is called a query. We use an off-the-shelf symbolic range analysis, e.g., à la Blume [4], to bootstrap our pointer analysis. By inferring the symbolic ranges of pointers, we obtain two alias tests: the global and the local approach. In the rest of this section we describe each one of these contributions.
3.1 A Core Language
We solve range analysis through abstract interpretation. To explain how we abstract each instruction in our intermediate representation, we shall use the language seen in Figure 6. Henceforth, we shall call this syntax our core language. We shall be working on programs in Extended Static Single Assignment (e-SSA) form [5]. E-SSA form is a flavor of Static Single Assignment (SSA) [8] form, with variable renaming after inequalities. Thus, our core language contains $\phi$-functions to ensure the single definition (SSA) property, and intersections to rename variables after conditionals. We assume that $\phi$-functions have only two arguments. Generalizing this notation to $n$-ary functions is immediate.
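For instance (a schematic sketch of ours, not one of the paper's figures), renaming after a conditional lets each branch carry the range information implied by the test:

```
// source                  // e-SSA form
if (i < N) {               //      bnz(i < N, l_t)
  ... use of i ...         // l_t: i_t = i ∩ [-∞, N-1]
} else {                   //      ... use of i_t ...
  ... use of i ...         // l_f: i_f = i ∩ [N, +∞]
}                          //      ... use of i_f ...
```

At the join point after the branch, a $\phi$-function merges $i_t$ and $i_f$ back into a single name, preserving the single-definition property.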
Author Version of « Symbolic Range Analysis of Pointers » Published in Code Generation and Optimization, Barcelona, 2016
```latex
\begin{align*}
\text{Integer constants} &::= \{c_1, c_2, \ldots\} \\
\text{Integer variables} &::= \{i_1, i_2, \ldots\} \\
\text{Pointer variables} &::= \{p_1, p_2, \ldots\} \\
\text{Instructions (I)} &::= \\
&\quad \text{Allocate memory} & p_0 &= \text{malloc}(i_0) \\
&\quad \text{Free memory} & & \text{free}(p_0) \\
&\quad \text{Pointer plus int} & p_0 &= p_1 + i_0 \\
&\quad \text{Pointer plus const} & p_0 &= p_1 + c_0 \\
&\quad \text{Bound intersection} & p_0 &= p_1 \sqcap [l, u] \\
&\quad \text{Load into pointer} & p_0 &= *p_1 \\
&\quad \text{Store from pointer} & *p_0 &= p_1 \\
&\quad \phi\text{-function} & p_0 &= \phi(p_1 : \ell_1,\ p_2 : \ell_2) \\
&\quad \text{Branch if not zero} & & \text{bnz}(v, \ell) \\
&\quad \text{Unconditional jump} & & \text{jump}(\ell)
\end{align*}
```
Figure 6. The syntax of our language of pointers.
Figure 7 shows the control flow graph of the program seen in Figure 1. The two allocations, at lines 17 and 18, are associated with $loc_0$ and $loc_1$, respectively.
3.3 Symbolic Range Analysis.
We start our pointer analysis by running an off-the-shelf range analysis parameterized on symbols. For the sake of completeness, we revisit the main notions associated with range analysis, which we borrow from Nazaré et al. [15]. We say that $E$ is a symbolic expression if, and only if, $E$ is defined by the grammar below. In this definition, $s$ is a symbol and $n \in \mathbb{N}$. The set of symbols $s$ in a program forms its symbolic kernel. The symbolic kernel is formed by names that cannot be represented as functions of other names in the program text. Concretely, this set contains the names of global variables and of variables assigned with values returned from library functions.
$$E ::= n \mid s \mid \min(E, E) \mid \max(E, E) \mid E - E \mid E + E \mid E/E \mid E \mod E \mid E \times E$$
We shall be performing arithmetic operations over the partially ordered set $S = S_E \cup \{-\infty, +\infty\}$, where $S_E$ is the set of symbolic expressions. The partial ordering is given by $-\infty \prec \ldots \prec -2 \prec -1 \prec 0 \prec 1 \prec 2 \prec \ldots \prec +\infty$. There exists no ordering between two distinct elements of the symbolic kernel of a program. For instance, $N \prec N+1$ but there is no relationship between an expression containing $N$ and another expression containing $M$.
A symbolic interval is a pair $R = [l, u]$, where $l$ and $u$ are symbolic expressions. We denote by $R_l$ the lower bound $l$ and $R_u$ the upper bound $u$. We define the partially ordered set of (symbolic) intervals $S^2 = (S \times S, \sqsubseteq)$, where the ordering operator is defined as:
$$[l_0, u_0] \sqsubseteq [l_1, u_1], \text{ if } l_1 \leq l_0 \land u_1 \geq u_0$$
From the previous definitions, we define the semi-lattice SymbRanges of symbolic intervals as $(S^2, \sqsubseteq, \sqcup, \emptyset, [-\infty, +\infty])$, where the join operator “$\sqcup$” is defined as:
$$[a_1, a_2] \sqcup [b_1, b_2] = [\min(a_1, b_1), \max(a_2, b_2)]$$
Our lattice has a least element $\emptyset$, such that:
$$\emptyset \sqcup [l, u] = [l, u] \sqcup \emptyset = [l, u]$$
and a greatest element $[-\infty, +\infty]$, such that:

$$[-\infty, +\infty] \sqcup [l, u] = [l, u] \sqcup [-\infty, +\infty] = [-\infty, +\infty]$$
For the sake of clarity, we also define the intersection operator $\sqcap$:

$$[a_1, a_2] \sqcap [b_1, b_2] = \begin{cases} \emptyset, & \text{if } a_2 < b_1 \text{ or } b_2 < a_1 \\ [\max(a_1, b_1), \min(a_2, b_2)], & \text{otherwise} \end{cases}$$

For $\sqcap$, $\emptyset$ is absorbing and $[-\infty, +\infty]$ is neutral.
The result of range analysis is a function $R : V \to S^2$ that maps each integer variable $i$ in a program to an interval $[l, u], l \leq u$, e.g., $R(i) = [l, u]$. The more precise the underlying range analysis, the more precise our final results will be. Nevertheless, the exact implementation of the range analysis is immaterial for the formalization that follows. In this paper, we are using the following widening operator on SymbRanges:
$$[l, u] \mathrel{\triangledown} [l', u'] = \begin{cases} [l, u] & \text{if } l = l' \text{ and } u = u' \\ [l, +\infty] & \text{if } l = l' \text{ and } u' > u \\ [-\infty, u] & \text{if } l' < l \text{ and } u' = u \\ [-\infty, +\infty] & \text{if } l' < l \text{ and } u' > u \end{cases}$$
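A concrete rendering of these lattice operators may help. The sketch below is ours, not the authors' implementation: it replaces symbolic endpoints with machine integers, INT_MIN/INT_MAX standing for $-\infty$/$+\infty$, and an `empty` flag playing the role of $\emptyset$:

```c
#include <limits.h>
#include <stdbool.h>

#define NEG_INF INT_MIN   /* stands for -infinity */
#define POS_INF INT_MAX   /* stands for +infinity */

typedef struct { int lo, hi; bool empty; } Interval;

static int imin(int a, int b) { return a < b ? a : b; }
static int imax(int a, int b) { return a > b ? a : b; }

/* Join: [a1,a2] ⊔ [b1,b2] = [min(a1,b1), max(a2,b2)]; ∅ is neutral. */
static Interval join_itv(Interval a, Interval b) {
    if (a.empty) return b;
    if (b.empty) return a;
    return (Interval){ imin(a.lo, b.lo), imax(a.hi, b.hi), false };
}

/* Meet: ∅ when the intervals are disjoint, else their overlap. */
static Interval meet_itv(Interval a, Interval b) {
    if (a.empty || b.empty || a.hi < b.lo || b.hi < a.lo)
        return (Interval){ 0, 0, true };
    return (Interval){ imax(a.lo, b.lo), imin(a.hi, b.hi), false };
}

/* Widening: a bound that grew between iterations jumps to infinity. */
static Interval widen_itv(Interval a, Interval b) {
    if (a.empty) return b;
    Interval r = a;
    if (!b.empty && b.lo < a.lo) r.lo = NEG_INF;
    if (!b.empty && b.hi > a.hi) r.hi = POS_INF;
    return r;
}
```

A faithful implementation would instead carry symbolic expression trees ordered as described above; the lattice structure, however, is the same.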
The only requirement that we impose on the implementation of range analysis is that it exists over SymbRanges, our lattice of symbolic intervals.
We denote by $(\alpha_{\text{SymbRanges}}, \gamma_{\text{SymbRanges}})$ the underlying Galois connection.
**Example 2.** A range analysis such as Nazaré et al.'s [13], if applied onto the program seen in Figure 3, will give us that $R(i_{\ell 3}) = [0, 0]$, $R(i_{\ell 5}) = [0, N - 1]$, $R(i_{\ell 7}) = [0, N + 1]$.
**3.4 Global Range Analysis of Pointers**
As we have mentioned in Section 2, we use two different strategies to disambiguate pointers: the global and the local test. Our global pointer analysis goes over the entire code of the program, associating variables that have pointer type with elements of an abstract domain that we define below. The local analysis, on the other hand, works only on small regions of the program text. We shall discuss the local test in Section 3.6. In this section, we focus on the global test, which is an abstract-interpretation-based algorithm.
**An Abstract Domain of Pointer Locations.** We associate pointers with tuples of size $n$: $(\text{SymbRanges} \cup \bot)^n$, where $n$ is the number of program sites where memory is allocated (the cardinality of $\text{Loc}$) and $\cup$ is the disjoint union.
Let $@(loc_i)$ denote the actual address value returned by the $i^{th}$ malloc of the program. By construction, all actual addresses are supposed to be offsets of a given $@(loc_i)$. The abstract value

$$GR(p) = (p_0, \ldots, p_{n-1})$$

represents (an abstract version of) the set of memory locations that pointer variable $p$ can address throughout the execution of a program.
**Figure 8.** Graphical representation of the abstract value $GR(p) = \{loc_1 + [3, 5], loc_3 + [3, 8]\}$ discussed in the text.
The goal of our GR analysis is to compute such an abstract value for each pointer of the program. Some elements in a tuple $GR(p)$ are bound to the undefined location, i.e., $\bot$. These elements are not interesting to us, as they do not encode any useful information. Thus, to avoid keeping track of them, we rely on the concept of support, which we state in Definition 2.
**Definition 2 (Support).** We denote by \(\text{supp}_{GR}(p)\) the set of indexes for which \(p_i\) is not \(\bot\):
\[\text{supp}_{GR}(p) = \{i \mid p_i \neq \bot\}\]
For the sake of readability, we denote, for instance, $GR(p) = (\bot, [l_1, u_1], \bot, [l_3, u_3], \bot)$ by the set $GR(p) = \{loc_1 + [l_1, u_1], loc_3 + [l_3, u_3]\}$. In the concrete world, this notation means that pointer $p$ can address any memory location from $@(loc_1) + l_1$ to $@(loc_1) + u_1$, and from $@(loc_3) + l_3$ to $@(loc_3) + u_3$.
For instance, consider that \(l_1 = 3, u_1 = 5, l_3 = 3\) and \(u_3 = 8\). \(GR(p) = \{\text{loc}_1 + [3, 5], \text{loc}_3 + [3, 8]\}\) is then depicted in Figure 8.
Now for the abstract operations: $(\bot, \ldots, \bot)$ is the least element of our lattice, and $([-\infty, +\infty], \ldots, [-\infty, +\infty])$ the greatest one.
Given the two abstract values $GR(p^1) = (p_0^1, \ldots, p_{n-1}^1)$ and $GR(p^2) = (p_0^2, \ldots, p_{n-1}^2)$, the union $GR(p^1) \sqcup GR(p^2)$ is the tuple $(q_0, \ldots, q_{n-1})$ where:

$$q_i = \begin{cases} \bot & \text{if } p_i^1 = p_i^2 = \bot \\ p_i^1 \sqcup p_i^2 & \text{else} \end{cases}$$
and $GR(p^1) \sqsubseteq GR(p^2)$ if and only if all involved (symbolic) intervals of $p^1$ are included in the ones of $p^2$: $\forall i \in \{0, \ldots, n - 1\}, p_i^1 \sqsubseteq p_i^2$ (considering $\bot \sqsubseteq R$ and $\bot \sqcup R = R$ for all non-empty intervals $R$). We call this structure, formed by $(\text{SymbRanges} \cup \bot)^n$ plus its partial ordering, the lattice MemLocs.
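A minimal concrete rendering of MemLocs (our sketch, with integer bounds and a hardcoded site count standing in for the symbolic machinery): a pointer's abstract value becomes a fixed-size array of intervals, one per allocation site, with an empty slot playing the role of $\bot$:

```c
#include <stdbool.h>

#define NSITES 2    /* number of allocation sites, |Loc|, in a toy program */

typedef struct { int lo, hi; bool bot; } Itv;   /* bot plays the role of ⊥ */
typedef struct { Itv at[NSITES]; } MemLoc;      /* one interval per loc_i  */

static int imin2(int a, int b) { return a < b ? a : b; }
static int imax2(int a, int b) { return a > b ? a : b; }

/* Componentwise union: ⊥ ⊔ R = R, otherwise the interval join. */
static MemLoc memloc_join(MemLoc a, MemLoc b) {
    MemLoc r;
    for (int i = 0; i < NSITES; i++) {
        if (a.at[i].bot)      r.at[i] = b.at[i];
        else if (b.at[i].bot) r.at[i] = a.at[i];
        else r.at[i] = (Itv){ imin2(a.at[i].lo, b.at[i].lo),
                              imax2(a.at[i].hi, b.at[i].hi), false };
    }
    return r;
}

/* Inclusion: every component of a lies inside the matching one of b. */
static bool memloc_leq(MemLoc a, MemLoc b) {
    for (int i = 0; i < NSITES; i++) {
        if (a.at[i].bot) continue;              /* ⊥ ⊑ anything */
        if (b.at[i].bot) return false;
        if (a.at[i].lo < b.at[i].lo || a.at[i].hi > b.at[i].hi)
            return false;
    }
    return true;
}
```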
EXAMPLE 3. For the example depicted in Figure 7, where we only have two malloc sites, denoted by $loc_0$ and $loc_1$, we obtain the following results: $GR(p) = GR(b) = \{loc_0 + [0, 0]\}$, $GR(m_0) = GR(s) = \{loc_1 + [0, 0]\}$, $GR(c) = \{loc_0 + [N, N]\}$, $GR(m_1) = \{loc_1 + [1, +\infty]\}$, $GR(i_7) = \{loc_0 + [N + strlen(m_0), N + strlen(m_0) + 1]\}$. How this mapping is found is discussed in the rest of this section.
Abstract semantics for $GR$, and concretisation. The abstract semantics of each instruction in our core language is given in Figure 9, which defines a system of equations whose fixed point gives us an approximation of the locations that each pointer may dereference. We remind the reader of our notation: $[l, u]_l = l$, and $[l, u]_u = u$. In Figure 9, this notation surfaces in the semantics of intersections. The abstract interpretation of the pointer-related instructions in Figure 7 yields the results discussed in Example 3.
\[
\begin{array}{ll}
j : p = \text{malloc}(v),\ v \text{ scalar} & \Rightarrow\ GR(p) = (\bot, \ldots, [0, 0], \ldots, \bot),\ [0, 0] \text{ in the } j^{th} \text{ component} \\[4pt]
\text{free}(p) & \Rightarrow\ GR(p) = (\bot, \ldots, \bot) \\[4pt]
v = v_1 & \Rightarrow\ GR(v) = GR(v_1) \\[4pt]
q = p + c,\ c \text{ scalar} & \Rightarrow\ GR(q) = (q_0, \ldots, q_{n-1}),\text{ with } q_i = \begin{cases} \bot & \text{if } p_i = \bot \\ p_i + R(c) & \text{else} \end{cases} \\[4pt]
q = \phi(p^1, p^2) & \Rightarrow\ GR(q) = GR(p^1) \sqcup GR(p^2) \\[4pt]
q = p^1 \sqcap [-\infty, p^2] & \Rightarrow\ q_i = \begin{cases} \bot & \text{if } p_i^1 = \bot \text{ or } p_i^2 = \bot \\ p_i^1 \sqcap [-\infty, (p_i^2)_u] & \text{else} \end{cases} \\[4pt]
q = p^1 \sqcap [p^2, +\infty] & \Rightarrow\ q_i = \begin{cases} \bot & \text{if } p_i^1 = \bot \text{ or } p_i^2 = \bot \\ p_i^1 \sqcap [(p_i^2)_l, +\infty] & \text{else} \end{cases} \\[4pt]
q = *p & \Rightarrow\ GR(q) = ([-\infty, +\infty], \ldots, [-\infty, +\infty]) \\[4pt]
*q = p & \Rightarrow\ \text{nothing}
\end{array}
\]
Figure 9. Constraint generation for $GR$, with $GR(p) = (p_0, \ldots, p_{n-1})$ for each pointer $p$ occurring on the right-hand side of a rule.
It remains to define how the abstract states are concretised ($@(loc_i)$ is the actual address returned by the $i^{th}$ malloc):
DEFINITION 3 (Concretisation). Given \( GR(p) \) an abstract value (a set of “abstract addresses for \( p \)”), denoted by \( GR(p) = (p₀, \ldots, pₙ₋₁) \) then we define its concretisation as follows:
\[
\gamma(GR(p)) = \bigcup_{i \in \mathrm{supp}_{GR}(p)} \{\, @(loc_i) + o \mid o \in p_i \,\}
\]
The concretisation function of this abstract value is thus a set of (concrete) addresses, obtained by shifting a set of base addresses by a certain value in \( \text{SymbRanges} \).
PROPOSITION 1. \((\alpha, \gamma)\) is a Galois connection.
PROOF. Immediate, since $(\alpha_{\text{SymbRanges}}, \gamma_{\text{SymbRanges}})$ is a Galois connection.
Solving the abstract system of constraints. Following the abstract interpretation framework, we solve our system of constraints by computing, for each pointer, a growing sequence of abstract values until convergence.
However, as the underlying lattice SymbRanges has infinite height, widening is necessary to ensure that this sequence of iterations actually terminates. Our widening operation on pointers generalizes the widening operation on ranges. It is defined as follows:
DEFINITION 4. Given \( GR(p) \) and \( GR(p') \) with \( GR(p) \sqsubseteq GR(p') \), we define the widening operator:
\[
GR(p) \triangledown GR(p') = (p₀ \triangledown p₀', \ldots, pₙ₋₁ \triangledown p'ₙ₋₁),
\]
where $\triangledown$ denotes the widening on SymbRanges, extended with $\bot \mathrel{\triangledown} \bot = \bot$ and $\bot \mathrel{\triangledown} [l, u] = [l, u]$.
As usual, we only apply the widening operator on a cut set of the control flow graph (here, only at $\phi$-functions).
Widening may lead our interpreter to produce very imprecise results. To recover part of the precision lost to widening, we use a descending sequence of finite size: after convergence, we redo a step of symbolic evaluation of the program, starting from the value obtained after convergence. One example of analysis will be detailed later, in Section 3.5.
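The ascending-then-descending recipe can be replayed on a toy loop. The sketch below is ours, with integer bounds and the concrete guard `i < 10` in place of symbolic ones: it widens the $\phi$-value of `i = 0; while (i < 10) i++;` to $[0, +\infty]$, and a single descending step recovers $[0, 10]$:

```c
#include <limits.h>

/* INT_MIN/INT_MAX stand for -infinity/+infinity. */
typedef struct { int lo, hi; } Rng;

static int rmin(int a, int b) { return a < b ? a : b; }
static int rmax(int a, int b) { return a > b ? a : b; }

static Rng rng_join(Rng a, Rng b) {
    return (Rng){ rmin(a.lo, b.lo), rmax(a.hi, b.hi) };
}

/* Widening: a bound that grew jumps straight to infinity. */
static Rng rng_widen(Rng a, Rng b) {
    Rng r = a;
    if (b.lo < a.lo) r.lo = INT_MIN;
    if (b.hi > a.hi) r.hi = INT_MAX;
    return r;
}

/* Transfer function of the loop body: apply the guard i < 10, then i + 1. */
static Rng body(Rng phi) {
    Rng guarded = { phi.lo, rmin(phi.hi, 9) };   /* i ⊓ [-inf, 9] */
    return (Rng){ guarded.lo + 1, guarded.hi + 1 };
}

/* Range of i at the loop's phi-node after widening, then after
   `descents` narrowing steps. */
static Rng phi_value(int descents) {
    Rng init = { 0, 0 };
    Rng phi = init;
    for (;;) {                                    /* ascending sequence */
        Rng next = rng_widen(phi, rng_join(init, body(phi)));
        if (next.lo == phi.lo && next.hi == phi.hi) break;
        phi = next;
    }
    for (int d = 0; d < descents; d++)            /* descending sequence */
        phi = rng_join(init, body(phi));
    return phi;
}
```

Note how a second descending step no longer changes the result, which is why a descending sequence of small, fixed size suffices in practice.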
The abstract interpretation of loads and stores. In Figure 9, we chose not to track precisely the intervals associated with pointers stored in memory. In other words, when interpreting loads, e.g., $q = *p$, we assign the top value of our lattice to $q$. This decision is pragmatic. As we shall explain in Section 4, a typical compilation infrastructure already contains analyses that are able to track the propagation of pointer information throughout memory. Our goal is not to solve this problem. We want to deliver a fast analysis that is precise enough to handle C-style pointer arithmetic.
3.5 Answering GR Queries
Our queries are based on the following result, which is an immediate consequence of the fact that our analysis is an abstract interpretation:
PROPOSITION 2 (Correctness). Let $p$ and $p'$ be two pointers in a given program. Then:
\[
\text{if } \mathrm{supp}_{GR}(p) \cap \mathrm{supp}_{GR}(p') = \emptyset \quad \text{or} \quad \forall i \in \mathrm{supp}_{GR}(p) \cap \mathrm{supp}_{GR}(p'),\ p_i \sqcap p_i' = \emptyset
\]
$^1$While speaking about symbolic ranges, we also have to concretize the values involved in the bounds of $p_i$; that is, we shall use the actual values between $S(p_i)$ and $S(p_i')$.
then $\gamma(GR(p)) \cap \gamma(GR(p')) = \emptyset$.
In other words, if the abstract values of two different pointers of the program have a null intersection, then the two concrete pointers do not alias. This result is directly implied by the abstract interpretation framework. Thanks to this result, we implement the query $Q_{GR}(p, p')$ as:
- If $GR(p)$ and $GR(p')$ have an empty intersection, then “they do not alias”.
- Else “they may alias”.
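As an illustration (our sketch, with integer bounds and a hardcoded number of allocation sites, not the paper's implementation), the query boils down to a loop over the sites shared by both supports:

```c
#include <stdbool.h>

#define NSITES 2                      /* allocation sites in a toy program */

typedef struct { int lo, hi; bool bot; } Itv;   /* bot plays the role of ⊥ */
typedef struct { Itv at[NSITES]; } GrVal;       /* one interval per loc_i  */

/* Q_GR sketch: answer "no alias" only when, at every allocation site the
   two pointers have in common, their offset ranges are disjoint. */
static bool gr_no_alias(GrVal a, GrVal b) {
    for (int i = 0; i < NSITES; i++) {
        if (a.at[i].bot || b.at[i].bot)
            continue;                 /* site not in both supports */
        if (a.at[i].hi >= b.at[i].lo && b.at[i].hi >= a.at[i].lo)
            return false;             /* overlapping ranges: may alias */
    }
    return true;                      /* empty intersection: no alias */
}
```

With the concrete stand-ins $N = 10$ and $strlen(m) = 3$, the two stores of the motivating example become $loc_0 + [0, 9]$ and $loc_0 + [10, 13]$, which this check reports as non-aliasing.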
### 3.6 Local Range Analysis of Pointers
The global pointer analysis is not path-sensitive. As a consequence, it cannot, for instance, distinguish the effects of different iterations of a loop upon the actual value of a pointer, or the effects of different branches of a conditional on that very pointer. The program in Figure 10 illustrates this issue. Pointers $a_2$ and $a_3$ clearly must not alias. Yet, their abstract states have non-empty intersections for $loc_1$. Therefore, the query mechanism of Section 3.5 would return a “may-alias” answer in this case.
To solve this problem, we have developed a local version of our pointer analysis. We call it local because it creates new locations for every $\phi$-function. Our local range analysis is simpler than its global counterpart. We solve it in a single iteration of abstract interpretation applied on the instructions of our core language. Instructions are evaluated abstractly in the order given by the program’s dominance tree. Figure 11 gives the abstract semantics of each instruction.
The abstract value $LR(p)$ exists in $(Loc \cup NewLocs) \times SymbRanges$, where $NewLocs$ denotes a set of “fresh location variables”, which are created by invocations of the function $NewLocs()$. As before, we write $loc + R$ instead of $(loc, R)$. Similarly to $\gamma_{GR}$, $\gamma_{LR}(loc + R)$ denotes the set of concrete addresses from $@(loc) + R_l$ to $@(loc) + R_u$.
To find a solution to the local analysis, we solve the system provided by the abstract rules seen in Figure 11. This resolution process involves computing an increasing sequence of abstract values for each pointer $p$ of the program. Contrary to the global analysis, this analysis is based on a finite lattice; hence, we do not need any widening operator. Figure 10 (Right) shows the result of the local analysis. Contrary to the global analysis, we have a new location bound to variable $a_3$, which is defined by a $\phi$ operator. The range of this new location is $[0, 0]$. The other variables that are functions of $a_3$, e.g., $a_4$ and $a_5$, now have non-intersecting ranges associated with this new memory name.
### 3.7 Answering LR Queries
The correctness of the local analysis is stated by the following proposition:
**Proposition 3** (Correctness). Let $p$ and $p'$ be two pointers in a given program, and let $\gamma_{LR}$ be the concretization of the abstract map $LR$, stated analogously to Definition 3. If $LR(p) = loc + R$, $LR(p') = loc' + R'$, $loc = loc'$ and $R \sqcap R' = \emptyset$, then $\gamma_{LR}(LR(p)) \cap \gamma_{LR}(LR(p')) = \emptyset$. In other words, $p$ and $p'$ never alias.
Thanks to this result, we implement the query $Q_{LR}(p, p')$:
- If $LR(p)$ and $LR(p')$ have a common base pointer with ranges that do not intersect, then “they do not alias”.
- Else “they may alias”.
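Proposition 3 translates into a few lines (again our sketch with integer offsets, not the authors' code):

```c
#include <stdbool.h>

typedef struct { int lo, hi; } Off;
typedef struct { int loc; Off r; } LrVal;   /* abstract value loc + [lo, hi] */

/* Q_LR sketch: report "no alias" only when both pointers hang off the
   same base location and their offset ranges are disjoint; anything
   else conservatively stays "may alias". */
static bool lr_no_alias(LrVal a, LrVal b) {
    if (a.loc != b.loc)
        return false;                            /* different bases: unknown */
    return a.r.hi < b.r.lo || b.r.hi < a.r.lo;   /* disjoint offsets */
}
```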
### 3.8 Complexity
The e-SSA representation ensures that we can implement our analysis sparsely. Sparsity is possible because the e-SSA form renames variables at each program point where new abstract information, e.g., ranges of integers and pointers, arises. According to Tavares et al. [24], this property – single information – is sufficient to enable sparse implementation of non-relational static analyses. Therefore, the abstract state of each variable is invariant along the entire live range of that variable. Consequently, the space complexity of our static analysis is $O(|V| \times I)$, where $V$ is the set of names of variables in the program in e-SSA form, and $I$ is a measure of the size of the information that can be bound to each variable.
We apply widening after one iteration of abstract interpretation. Thus, we let the state of a variable change first
from $[\bot, \bot]$ to $[s_l, s_u]$, where $s_l \neq -\infty$ and $s_u \neq +\infty$. From there, we can reach either $[-\infty, s_u]$ or $[s_l, +\infty]$. Finally, this abstract state can jump to $[-\infty, +\infty]$. Hence, our time complexity is $O(3 \times |V|) = O(|V|)$. This observation also prevents our algorithm from generating expressions with very long chains of “min” and “max” expressions. Therefore $I$, the amount of information associated with a variable, can be represented in $O(1)$ space. As a consequence of this frugality, our static analysis runs in $O(|V|)$ time, and requires $O(|V|)$ space.
### 3.9 A wrap-up Example
Example 4 shows how our analysis works on the program seen in Figure 1.
**Example 4.** Figure 7 shows the control flow graph (CFG) of the program in Figure 1. Our graph is in e-SSA form. Figure 12 shows the result of widening ranges after one round of abstract interpretation (stabilization achieved), and a descending sequence of size two. Our system stabilizes after each instruction is visited four times. The first visit does initialization, the second widening (and the stabilization check), and the last two build the descending sequence.
This example illustrates the need for widening to ensure termination. Our program has a cycle of dependencies between pointers $i_1$, $i_2$ and $i_3$. If not for widening, the range of pointer $i_3$, incremented in line 5 of Figure 1, would grow forever. Thus, as usual in Abstract Interpretation, we must break the cyclic dependencies between the pointers under analysis by inserting widening points (points in the CFG where widening is applied to ensure convergence).
Returning to our example of Figure 1, we are interested in knowing, for instance, that the memory access at line 6 is independent of the accesses that happen at line 10. To achieve this goal, we must bound the memory regions covered by pointers $i_3$ and $i_7$. A cyclic dependence happens at the operation $i{+}{+}$, because in this case we have a pointer being used as both source and destination of the update. Thus, we could have inserted widening points at store and load instructions. However, in the Abstract Interpreter depicted in Figure 1, it was sufficient to insert widening points at $\phi$ functions (as we already said before) because:
- heads of loops are $\phi$ functions (thus dependencies between variables of different iterations of loops are broken).
- we are working on (e-)SSA-form programs; thus, the only inter-iteration dependencies are successive stores to the same address: $*q = \ldots, *q = \ldots$. The value $GR(q)$ is the union of all information gathered inside the loop. (In essence, memory addresses are not in static single assignment form, i.e., the same address can be the target of a store multiple times.) This information might grow forever; hence, we would have inserted a widening point on the last write. In our case, the information we store is already the top of our lattice; hence, there is no need for widening.
| Var | GR | LR |
|------|------|------|
| $b, p, t_0$ | $[0, 0, \bot]$ | $loc_0 + [0, 0]$ |
| $m_0, s$ | $[\bot, 0, 0]$ | $loc_1 + [0, 0]$ |
| $i_1$ | $[0, 0, \bot]$ | $loc_2 + [0, 0]$ |
| $i_2$ | $[0, 0, \bot]$ | $loc_2 + [0, 0]$ |
| $t_0$ | $[1, 1, \bot]$ | $loc_2 + [1, 1]$ |
| $c$ | $[N, N, \bot]$ | $loc_0 + [N, N]$ |
| $i_3$ | $[1, 1, \bot]$ | $loc_2 + [1, 1]$ |
| $i_4$ | $[\bot, \bot]$ | $loc_2 + [1, 1]$ |
| $f$ | $[k, k, \bot]$ | $loc_0 + [k, k]$ |
| $m_1$ | $[\bot, 0, 0]$ | $loc_0 + [0, 0]$ |
| $m_2$ | $[\bot, 1, \bot]$ | $loc_0 + [0, 0]$ |
| $m_3$ | $[\bot, 1, \bot]$ | $loc_0 + [0, 0]$ |
| $m_4$ | $[\bot, 1, \bot]$ | $loc_0 + [0, 0]$ |
| $m_5$ | $[\bot, 1, \bot]$ | $loc_0 + [0, 0]$ |
| $t_1$ | $[0, +\infty, \bot]$ | $loc_0 + [0, 0]$ |
| $t_2$ | $[0, +\infty, \bot]$ | $loc_0 + [0, 0]$ |
| $t_3$ | $[1, +\infty, \bot]$ | $loc_0 + [0, 0]$ |
| $t_4$ | $[1, +\infty, \bot]$ | $loc_0 + [0, 0]$ |
| $t_5$ | $[N, +\infty, \bot]$ | $loc_0 + [0, 0]$ |
| $t_6$ | $[N, k_1, \bot]$ | $loc_0 + [0, 0]$ |
| $t_7$ | $[N + 1, \bot]$ | $loc_0 + [0, 0]$ |
| $t_8$ | $[0, N - 1, \bot]$ | $loc_0 + [0, 0]$ |
| $t_9$ | $[1, N, \bot]$ | $loc_0 + [0, 0]$ |
| $t_{10}$ | $[1, N, \bot]$ | $loc_0 + [0, 0]$ |
| $t_{11}$ | $[N, N, \bot]$ | $loc_0 + [0, 0]$ |
| $t_{12}$ | $[N, k, \bot]$ | $loc_0 + [0, 0]$ |
| $t_{13}$ | $[N, k + 1, \bot]$ | $loc_0 + [0, 0]$ |
| $t_{14}$ | $[N, N, \bot]$ | $loc_0 + [0, 0]$ |
| $t_{15}$ | $[N, k, \bot]$ | $loc_0 + [0, 0]$ |
| $t_{16}$ | $[N, k + 1, \bot]$ | $loc_0 + [0, 0]$ |
| $t_{17}$ | $[N, N, \bot]$ | $loc_0 + [0, 0]$ |
Figure 12. Abstract interpretation of the CFG seen in Figure 7 (program in Figure 1). For GR, we associate $loc_0$ with the malloc at line 17 and $loc_1$ with the malloc at line 18 of the program. Only entries whose GR or LR changed are rewritten after the growing and descending iterations. We let $k = N + strlen(m_0)$.
### 4. Experiments
We have implemented our range analysis in the LLVM compiler, version 3.5. In this section, we show a few numbers that we have obtained with this implementation. All our experiments have been performed on an Intel i7-4770K, with 8GB of memory, running Ubuntu 14.04.2. Our goal with these experiments is to show: (i) that our alias analysis is more precise than other alternatives of practical runtime; and (ii) that it scales up to large programs.
**On the Precision of our Analysis.** In this section, we compare our analysis against the other pointer analysers that are available in LLVM 3.5, namely basic and SCEV. The first of them, although called “basic”, is currently the most effective alias analysis in LLVM, and is the default choice at the -O3 optimization level. It relies on a number of heuristics to disambiguate pointers:
---
2 This list has been taken from the LLVM documentation, available at http://llvm.org/docs/AliasAnalysis.html in September of 2015.
• Distinct globals, stack allocations, and heap allocations can never alias.
• Globals, stack allocations, and heap allocations never alias the null pointer.
• Different fields of a structure do not alias.
• Indexes into arrays with statically differing subscripts cannot alias.
• Many common standard C library functions never access memory or only read memory.
• Function calls cannot reference stack allocations which never escape from the function that allocates them.
As we see from the above list, the basic alias analysis has some of the capabilities of the technique that we present in this paper, namely the ability to distinguish fields and indices within aggregate types. In this case, such disambiguation is only possible when the aggregates are indexed with constants known at compilation time. For situations when these indices are symbols, LLVM relies on a second kind of analysis to perform the disambiguation: the “scalar-evolution-based” (SCEV) alias analysis. This analysis tries to infer closed-form expressions to the induction variables used in loops. For each loop such as:
\[
\text{for } (i = B; i < N; i += S) \{ \ldots a[i] \ldots \}
\]
this analysis associates variable $i$ with the expression $i = B + \text{iter} \times S$, $i < N$. The parameter $\text{iter}$ represents the current iteration of the loop. With this information, SCEV can track the ranges of the indices that dereference $a$ within the loop. Contrary to our analysis, SCEV is only effective to disambiguate pointers accessed within loops and indexed by variables in the expected closed form.
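As a sanity check of the closed form (our toy code, unrelated to the LLVM implementation), one can replay the loop and compare the induction variable against $B + \text{iter} \times S$ at every step:

```c
/* Value predicted by the closed form i = B + iter * S for the loop
   for (i = B; i < N; i += S). */
static int scev_value(int B, int S, int iter) {
    return B + iter * S;
}

/* Cross-check the closed form against actually running the loop;
   returns 1 iff they agree on every iteration. */
static int scev_matches_loop(int B, int N, int S) {
    int iter = 0;
    for (int i = B; i < N; i += S, iter++)
        if (i != scev_value(B, S, iter))
            return 0;
    return 1;
}
```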
Figure 13 shows how the three different analyses fare when applied on larger benchmarks. For this experiment we have chosen three benchmarks that have been used in previous work that compares pointer analyses: Prolangs [20], PtrDist [29] and MallocBench [9]. We first notice that, in general, all the pointer analyses in LLVM disambiguate a relatively low number of pointers. This happens because many pointers are passed as arguments of functions, and, not knowing if these functions will be called from outside the program, the analyses must, conservatively, assume that these parameters may alias. Second, we notice that our pointer analysis is one order of magnitude more precise than the scalar-evolution-based implementation available in LLVM. Finally, we notice that we are able to disambiguate more queries than the basic analysis. Furthermore, our results complement it in non-trivial ways. In total, we tried to disambiguate 3.093 million pairs of pointers. Our analysis found out that 1.29 million pairs reference non-overlapping regions. The basic analysis has been able to distinguish 953 thousand pairs. By combining these two analyses, we extended this number to 1.439 million pairs of pointers. SCEV could not increase this number any further.
Figure 14 shows the proportion of queries that we have been able to disambiguate with the global test of Section 3.4.
Figure 13. Comparison between three different alias analyses. We let \(r+b\) be the combination of our technique and the basic alias analysis of LLVM. Numbers in scev, basic, rbaa and \(r+b\) show percentage of queries that answer “no-alias”.
The two noalias columns of Figure 14 correspond to the percentage in column %rbaa applied to the column #Queries of Figure 13. Overall, the global test has given us 239,008 “no-alias” answers, out of 1,290,457. This corresponds to 18.52% of all the pairs of pointers that we have disambiguated. We did not show the local test in this table because the two tests are not directly comparable. The global test disambiguates pointers, and the local test disambiguates the addresses used in instructions such as loads and stores. These instructions can use pointers that might dereference overlapping regions; however, not at the same moment during the execution of the program.
Figure 15. Runtime of our analysis for the 50 largest benchmarks in the LLVM test suite. Each point on the X-axis represents a different benchmark. Benchmarks are ordered by size. This experiment took less than 10 seconds.
On the Scalability of our Analysis. The chart in Figure 15 shows how our analysis scales when applied on programs of different sizes. We have used the 50 largest programs in the LLVM benchmark suite. These programs gave us a total of 800,720 instructions in the LLVM intermediate representation, and a total of 241,658 different pointer variables. We analyzed all these 50 programs in 8.36 seconds; in effect, we can analyze 100,000 instructions in about one second. In this case, we are counting only the time to map variables to values in SymbRanges. We do not count the time to query each pair of pointers, because compiler optimizations usually perform these queries selectively, for instance, only for pairs of pointers within a loop. Also, we do not count the time to run the out-of-the-box implementation of range analysis mentioned in Section 3.3, because our version of it is not implemented within LLVM; it runs only once, and we query its results afterwards, never having to re-execute it.
The chart in Figure 15 provides strong visual indication of the linear behavior of our algorithm. We have found, indeed, cogent evidence pointing in this direction. The linear correlation coefficient ($R$) indicates how strong the linear relationship between two variables is: the closer to one, the more linear the correlation. The linear correlation between time and number of instructions for the programs seen in Figure 15 is 0.982, and the correlation between time and number of pointers is 0.975.
5. Related Work
The contribution of this work is a new representation of pointers, based on the SymbRanges lattice, and an algorithm to reach a fixed point in this lattice, based on abstract interpretation. This contribution complements classic work on pointer analysis. In other words, our representation of pointers can be used to enhance the precision of algorithms such as Steensgaard’s [23], Andersen’s [2], or even the state-of-the-art technique of Hardekopf and Lin [11]. These techniques map pointers to sets of locations, but they could be augmented to map pointers to sets of locations plus ranges. Furthermore, the use of our approach does not prevent the employment of acceleration techniques such as lazy cycle detection [10] or wave propagation [18]. There exists previous work that uses lattices similar to ours, albeit with different resolution algorithms. For instance, much of the work on automatic parallelization has some way to associate symbolic offsets, usually loop bounds, with pointers. Michael Wolfe [25, Ch.7] and Aho et al. [1, Ch.11] have entire chapters devoted to this issue. The key difference between our work and this line of research is the algorithm used to solve pointer relations: they resort to integer linear programming (ILP) or the Greatest Common Divisor test to solve Diophantine equations, whereas we perform abstract interpretation. Even Rugina and Rinard [19], whose work we believe is the state-of-the-art in the field today, use integer linear programming to solve symbolic relations between variables. We speculate that the ILP approach is too expensive to be used in large programs; hence, concessions must be made for the sake of speed. For instance, whereas the previous literature that we know of restricts its analyses to pointers within loops, we can analyze programs with over one million assembly instructions in a few seconds.
There exists work that, like ours, associates intervals with pointers and solves static analyses via abstract interpretation techniques. However, to the best of our knowledge, these approaches have a fundamental difference to our work: they use integer intervals à la Cousot [7], whereas we use symbolic intervals. The inspiration for much of this work springs from Balakrishnan and Reps’ notion of Value Set Analysis [3]. Integer intervals have also been used by Yong et al. [27] and, more recently, by Oh et al. [16]. In the latter case, Oh et al. use pointer disambiguation incidentally, to demonstrate their ability to implement static analyses efficiently in a context-sensitive way. Even though integer ranges fit the needs of machine code well, as demonstrated by Balakrishnan and Reps, we believe that further precision requires more expressive lattices. We have not implemented value set analysis, but we have tried a simple experiment: we counted the number of pointers that have integer ranges, and compared this number against the quantity of pointers that have symbolic ranges. We found that 20.47% of the pointers in our three benchmark suites have exclusively symbolic ranges. Classic range analysis would not be able to distinguish them. Notice that numeric ranges are more common among pointer variables than among integer variables, because fields within structs, a very common construct in C, are indexed through integers. Finally, the fact that we use
Bodik’s e-SSA form distinguishes our abstract interpretation algorithm from previous work. This representation lets us solve our analysis sparsely, whereas Balakrishnan’s algorithm works on a dense representation that associates facts with pairs formed by variables and program points.
6. Conclusion
In this paper we have presented a new alias analysis technique that handles, within the same theoretical framework, the subtleties of pointer arithmetic and memory indirection. Our technique can disambiguate regions within arrays and C-like structs using the same abstract interpreter. We have achieved precision in our algorithm by combining alias analysis with classic range analysis on the symbolic domain. Our analysis is fast, and handles cases that the implementations of pointer analyses currently available in LLVM cannot deal with. In future work, we plan to investigate better splitting strategies and other more expressive lattices to improve the global precision of our analyses.
Acknowledgment: This project is supported by the Brazilian Ministry of Science and Technology through CNPq, the Intel Corporation (the ISRA eCoSoC project) and the INRIA-FAPEMIG cooperation grant (The Prospiel project).
References
A Scheme of Model Verification of the Concurrent Discrete Wavelet Transform (DWT) for Image Compression
Kamrul Hasan Talukder and Koichi Harada
Abstract—The scientific community has invested a great deal of effort in the field of the discrete wavelet transform in the last few decades. The discrete wavelet transform (DWT), associated with vector quantization, has been proved to be a very useful tool for image compression. However, the DWT is a very computationally intensive process, requiring innovative and computationally efficient methods to obtain the image compression. Concurrent transformation of the image can be an important solution to this problem. This paper proposes a model of concurrent DWT for image compression. Additionally, formal verification of the model has been performed. Here the Symbolic Model Verifier (SMV) has been used as the formal verification tool. The system has been modeled in SMV and some of its properties have been verified formally.
Keywords—Computation Tree Logic, Discrete Wavelet Transform, Formal Verification, Image Compression, Symbolic Model Verifier.
I. INTRODUCTION
The research in compression techniques has stemmed from the ever-increasing need for efficient data transmission, storage and utilization of hardware resources. Uncompressed image data require considerable storage capacity and transmission bandwidth. Despite rapid progress in mass storage density, processor speeds and digital communication system performance, demand for data storage capacity and data transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive multimedia-based applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to signal storage and digital communication technology.
Compressing an image is significantly different from compressing raw binary data. Of course, general-purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space. Lossless compression involves compressing data which, when decompressed, will be an exact replica of the original data. This is the case when binary data such as executables and documents are compressed. They need to be exactly reproduced when decompressed. On the other hand, images need not be reproduced 'exactly'. An approximation of the original image is enough for most purposes, as long as the error between the original and the compressed image is tolerable.
The neighboring pixels of most images are highly correlated and therefore hold redundant information [1]. The foremost task then is to find a less correlated representation of the image. Image compression is actually the reduction of the amount of this redundant data (bits) without degrading the quality of the image to an unacceptable level [2] [3] [4]. There are mainly two basic components of image compression: redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source image, while irrelevancy reduction omits parts of the signal that are not noticed by the signal receiver, i.e., the Human Visual System (HVS) [5], which presents some tolerance to distortion, depending on the image content and viewing conditions. Consequently, pixels must not always be regenerated exactly as originated, and the HVS will not detect the difference between original and reproduced images.
The current standards for compression of still images (e.g., JPEG) use the Discrete Cosine Transform (DCT), which represents an image as a superposition of cosine functions with different discrete frequencies [6]. The DCT can be regarded as a discrete-time version of the Fourier cosine series. It is a close relative of the Discrete Fourier Transform (DFT), a technique for converting a signal into elementary frequency components. Thus, the DCT can be computed with a Fast Fourier Transform (FFT)-like algorithm of complexity $O(n \log_2 n)$.
More recently, the wavelet transform has emerged as a cutting edge technology within the field of image analysis. The wavelet transformations have a wide variety of different applications in computer graphics including radiosity [7], multiresolution painting [8], curve design [9], mesh optimization [10], volume visualization [11], image searching [12] and one of the first applications in computer graphics,
image compression. The Discrete Wavelet Transformation (DWT) provides adaptive spatial frequency resolution (better spatial resolution at high frequencies and better frequency resolution at low frequencies) that is well matched to the properties of an HVS.
This paper proposes a technique of concurrent DWT-based image compression which has also been formally verified. Simulation and testing [13] are traditional approaches for verifying systems. Both involve experimenting with the system before deploying it in the field. While simulation is performed on an abstraction or a model of the system, testing is performed on the actual product. In both cases, these methods typically inject signals at certain points in the system and observe the resulting signals at other points. Checking all possible interactions and finding potential pitfalls using simulation and testing techniques is not always possible. Formal verification [14], an appealing alternative to simulation and testing, conducts an exhaustive exploration of all possible behaviors of the system. Thus, when a design is marked correct by a formal method, it implies that all behaviors have been explored, and the question of adequate coverage or a missed behavior becomes irrelevant. There are some robust tools for formal verification, such as SMV, SPIN, COSPAN, and VIS [14]. Our method has been modeled in SMV and the properties of the system have been verified formally.
II. DWT IN IMAGE COMPRESSION
Wavelet transform exploits both the spatial and frequency correlation of data by dilations (or contractions) and translations of a mother wavelet on the input data. It supports multi-resolution analysis of data, i.e. it can be applied at different scales according to the details required, which allows progressive transmission and zooming of the image without the need for extra storage. Another encouraging feature of the wavelet transform is its symmetric nature: both the forward and the inverse transforms have the same complexity, enabling fast compression and decompression routines. Its characteristics well suited for image compression include the ability to take into account the Human Visual System’s (HVS) characteristics, very good energy compaction capabilities, robustness under transmission, high compression ratio, etc.
The implementation of wavelet compression scheme is very similar to that of subband coding scheme: the signal is decomposed using filter banks. The output of the filter banks is down-sampled, quantized, and encoded. The decoder decodes the coded representation, up-samples and recomposes the signal.
Wavelet transform divides the information of an image into an approximation sub-signal and detail sub-signals. The approximation sub-signal shows the general trend of pixel values, and the three detail sub-signals show the vertical, horizontal and diagonal details or changes in the image. If these details are very small (below a threshold), they can be set to zero without significantly changing the image. The greater the number of zeros, the greater the compression ratio. If the energy retained (the amount of information retained by an image after compression and decompression) is 100%, then the compression is lossless, as the image can be reconstructed exactly. This occurs when the threshold value is set to zero, meaning that the details have not been changed. If any value is changed, then energy will be lost and thus lossy compression occurs. As more zeros are obtained, more energy is lost. Therefore, a balance between the two needs to be found [15].
The primary aim of any compression method is generally to express an initial set of data using some smaller set of data, either with or without loss of information. As an example, suppose we have a function \( f(x) \) expressed as a weighted sum of basis functions \( u_1(x), \ldots, u_m(x) \) as given below:
\[
f(x) = \sum_{i=1}^{m} c_i u_i(x)
\]
where \( c_1, \ldots, c_m \) are some coefficients. We will try to find a function that approximates \( f(x) \) with fewer coefficients, perhaps using a different basis. That means we are looking for:
\[
\hat{f}(x) = \sum_{i=1}^{\hat{m}} \hat{c}_i \hat{u}_i(x)
\]
with a user-defined error tolerance \( \varepsilon \) (\( \varepsilon = 0 \) for lossless compression) such that \( \hat{m} < m \) and \( \| f(x) - \hat{f}(x) \| \leq \varepsilon \). In general, one could attempt to construct a new set of basis functions \( \hat{u}_1, \ldots, \hat{u}_{\hat{m}} \); a simpler form of the problem is to seek a good approximation in a fixed basis.
One form of the compression problem is to order the coefficients \( c_1, \ldots, c_m \) so that for \( \hat{m} < m \), the first \( \hat{m} \) elements of the sequence give the best approximation \( \hat{f}(x) \) to \( f(x) \) as measured in the \( L^2 \) norm.
Let \( \pi(i) \) be a permutation of \( 1, \ldots, m \) and \( \hat{f}(x) \) be a function that uses the coefficients corresponding to the first \( \hat{m} \) numbers of the permutation \( \pi(i) \):
\[
\hat{f}(x) = \sum_{i=1}^{\hat{m}} c_{\pi(i)} u_{\pi(i)}(x)
\]
The square of the \( L^2 \) error in this approximation is given by:
\[
\| f(x) - \hat{f}(x) \|_2^2 = \left\langle \sum_{i=\hat{m}+1}^{m} c_{\pi(i)} u_{\pi(i)}, \sum_{j=\hat{m}+1}^{m} c_{\pi(j)} u_{\pi(j)} \right\rangle
\]
\[
= \sum_{i=\hat{m}+1}^{m} \sum_{j=\hat{m}+1}^{m} c_{\pi(i)} c_{\pi(j)} \left\langle u_{\pi(i)}, u_{\pi(j)} \right\rangle
\]
\[
= \sum_{i=\hat{m}+1}^{m} \left( c_{\pi(i)} \right)^2
\]
where the last step uses the orthonormality of the basis functions.
Wavelet image compression using the \( L^2 \) norm can be summarized in the following steps:
i) Compute coefficients \( c_1, \ldots, c_m \) representing an image in a normalized two-dimensional Haar basis.
ii) Sort the coefficients in order of decreasing magnitude to produce the sequence $c_{\pi(1)}, \ldots, c_{\pi(m)}$.
iii) Given an allowable error $\varepsilon$ and starting from $\hat{m} = m$, find the smallest $\hat{m}$ for which
$$\sum_{i=\hat{m}+1}^{m} (c_{\pi(i)})^2 \leq \varepsilon^2$$
The first step is accomplished by applying either of the 2D Haar wavelet transforms being sure to use normalized basis functions. Any standard sorting method will work for the second step and any standard search technique can be used for third step. However, for large images sorting becomes exceedingly slow. The procedure below outlines a more efficient method of accomplishing steps 2 and 3, which uses a binary search strategy to find a threshold $\tau$ below which coefficients can be truncated.
The procedure takes as input a 1D array of coefficients $c$ (with each coefficient corresponding to a 2D basis function) and an error tolerance. For each guess at a threshold $\tau$ the algorithm computes the square of the $L^2$ error that would result from discarding coefficients smaller in magnitude than $\tau$. This squared error $s$ is compared to $\varepsilon^2$ at each loop to decide if the search would continue in the upper or lower half of the current interval. The algorithm halts when the current interval is so narrow that the number of coefficients to be discarded no longer changes [16].
**procedure Compress (C : array [1..m] of reals; $\varepsilon$ : real)**
```plaintext
$\tau_{min} \leftarrow \min_i \{|C[i]|\}$
$\tau_{max} \leftarrow \max_i \{|C[i]|\}$
do
    $\tau \leftarrow (\tau_{min} + \tau_{max})/2$
    $s \leftarrow 0$
    for $i \leftarrow 1$ to $m$ do
        if $|C[i]| < \tau$ then $s \leftarrow s + (C[i])^2$
        end if
    end for
    if $s < \varepsilon^2$ then $\tau_{min} \leftarrow \tau$
    else $\tau_{max} \leftarrow \tau$
    end if
until $\tau_{min} \approx \tau_{max}$
for $i \leftarrow 1$ to $m$ do
    if $|C[i]| < \tau$ then $C[i] \leftarrow 0$
    end if
end for
```
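As a sketch only, the binary-search thresholding above can be transcribed into C as follows; the function name, the convergence tolerance, and the final use of $\tau_{min}$ as the discard threshold are our choices, not prescribed by the text:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Zero out the coefficients of smallest magnitude whose combined
   squared L2 error stays within eps^2, using binary search on the
   threshold tau. */
void compress(double *c, size_t m, double eps) {
    double tmin = fabs(c[0]), tmax = fabs(c[0]);
    for (size_t i = 1; i < m; i++) {
        if (fabs(c[i]) < tmin) tmin = fabs(c[i]);
        if (fabs(c[i]) > tmax) tmax = fabs(c[i]);
    }
    while (tmax - tmin > 1e-12) {
        double tau = (tmin + tmax) / 2;
        double s = 0;
        for (size_t i = 0; i < m; i++)
            if (fabs(c[i]) < tau) s += c[i] * c[i];
        if (s < eps * eps) tmin = tau; else tmax = tau;
    }
    /* tmin is the largest validated threshold: discarding everything
       below it keeps the squared error under eps^2 */
    for (size_t i = 0; i < m; i++)
        if (fabs(c[i]) < tmin) c[i] = 0;
}
```

Using the last validated $\tau_{min}$ for the final pass guarantees that the squared error of the discarded coefficients never exceeds $\varepsilon^2$.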
The pseudocode fragment below implements a greedy $L^1$ compression scheme, which works by accumulating in a 2D array $\Delta_{x,y}$ the error introduced by discarding a coefficient and checking whether this error has exceeded a user-defined threshold.
**for** each pixel $(x, y)$ **do**
$$\Delta_{x,y} \leftarrow 0$$
**end for**
**for** $i \leftarrow 1$ **to** $m$ **do**
$$\Delta' \leftarrow \Delta + \text{error from discarding } c[i]$$
**if** $\sum_{x,y} |\Delta'_{x,y}| < \varepsilon$ **then**
$$c[i] \leftarrow 0$$
$$\Delta \leftarrow \Delta'$$
**end if**
**end for**
To understand how wavelets work, let us start with a simple example. Assume we have a 1D image with a resolution of four pixels, having values [9 7 3 5]. The Haar wavelet basis can be used to represent this image by computing a wavelet transform. To do this, we first average the pixels together, pairwise, to get a new lower-resolution image with pixel values [8 4]. Clearly, some information is lost in this averaging process. We need to store some detail coefficients to recover the original four pixel values from the two averaged values. In our example, 1 is chosen for the first detail coefficient, since the average computed is 1 less than 9 and 1 more than 7. This single number is used to recover the first two pixels of our original four-pixel image. Similarly, the second detail coefficient is -1, since 4 + (-1) = 3 and 4 - (-1) = 5. Thus, the original image is decomposed into a lower-resolution (two-pixel) version and a pair of detail coefficients. Repeating this process recursively on the averages gives the full decomposition shown in Table I:
<table>
<thead>
<tr>
<th>Resolution</th>
<th>Averages</th>
<th>Detail Coefficients</th>
</tr>
</thead>
<tbody>
<tr>
<td>4</td>
<td>[9 7 3 5]</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>[8 4]</td>
<td>[1 -1]</td>
</tr>
<tr>
<td>1</td>
<td>[6]</td>
<td>[2]</td>
</tr>
</tbody>
</table>
Thus, for the one-dimensional Haar basis, the wavelet transform of the original four-pixel image is given by [6 2 1 -1]. The way used to compute the wavelet transform by recursively averaging and differencing coefficients, is called a *filter bank*. We can reconstruct the image to any resolution by recursively adding and subtracting the detail coefficients from the lower resolution versions.
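The recursive averaging and differencing of this filter bank can be sketched directly in C; the function name and the fixed scratch buffer are illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>

/* In-place 1D Haar decomposition by repeated pairwise averaging
   and differencing; n must be a power of two. */
void haar1d(double *v, size_t n) {
    double tmp[64];                 /* scratch; assumes n <= 64 */
    for (size_t w = n; w > 1; w /= 2) {
        for (size_t j = 0; j < w / 2; j++) {
            tmp[j]       = (v[2*j] + v[2*j+1]) / 2;  /* average */
            tmp[w/2 + j] = v[2*j] - tmp[j];          /* detail  */
        }
        for (size_t j = 0; j < w; j++) v[j] = tmp[j];
    }
}
```

Applying it to [9 7 3 5] reproduces the decomposition [6 2 1 -1] derived above.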
It has been shown how a one-dimensional image can be treated as a sequence of coefficients. Alternatively, we can think of images as piecewise-constant functions on the half-open interval [0, 1). To do so, the concept of a *vector space* is used. A one-pixel image is just a function that is constant over the entire interval [0, 1). Let $V^0$ be the vector space of all these functions. A two-pixel image has two constant pieces over the intervals [0, 1/2) and [1/2, 1). We call the space containing all these functions $V^1$. If we continue in this manner, the space $V^j$ will include all piecewise-constant functions defined on the interval [0, 1) with constant pieces over each of $2^j$ equal subintervals. We can now think of every one-dimensional image with $2^j$ pixels as an element, or vector, in $V^j$. Note that because these vectors are all functions defined on the unit interval, every vector in $V^j$ is also contained in $V^{j+1}$. For example, we can always describe a piecewise-constant function with two intervals as a piecewise-constant function with four intervals, with each interval in the first function corresponding to a pair of intervals in the second. Thus, the spaces $V^j$ are nested; that is, $V^0 \subset V^1 \subset V^2 \subset \cdots$. This nested set of spaces $V^j$ is a necessary ingredient for the mathematical theory of multiresolution analysis [16]. It guarantees that...
every member of \( V^0 \) can be represented exactly as a member of higher resolution space \( V^1 \). The converse, however, is not true: not every function \( G(x) \) in \( V^1 \) can be represented exactly in lower resolution space \( V^0 \); in general there is some lost detail [17].
Now we define a basis for each vector space \( V^j \). The basis functions for the spaces \( V^j \) are called scaling functions, and are usually denoted by the symbol \( \phi \). A simple basis for \( V^j \) is given by the set of scaled and translated box functions [18]:
\[
\phi_i^j(x) = \phi(2^j x - i), \quad i = 0, \ldots, 2^j - 1,
\]
where
\[
\phi(x) = \begin{cases}
1 & \text{for } 0 \leq x < 1 \\
0 & \text{otherwise}
\end{cases}
\]
The wavelets corresponding to the box basis are known as the Haar wavelets, given by:
\[
\Psi_i^j(x) = \Psi(2^j x - i), \quad i = 0, \ldots, 2^j - 1,
\]
where
\[
\Psi(x) = \begin{cases}
1 & \text{for } 0 \leq x < 1/2 \\
-1 & \text{for } 1/2 \leq x < 1 \\
0 & \text{otherwise}
\end{cases}
\]
Thus, the DWT for an image as a 2D signal will be obtained from 1D DWT. We get the scaling function and wavelet function for 2D by multiplying two 1D functions. The scaling function is obtained by multiplying two 1D scaling functions: \( \phi(x,y) = \phi(x) \phi(y) \). The wavelet functions are obtained by multiplying two wavelet functions or wavelet and scaling function for 1D. For the 2D case, there exist three wavelet functions that scan details in horizontal \( \Psi^{(1)}(x,y) = \phi(x) \Psi(y) \), vertical \( \Psi^{(2)}(x,y) = \Psi(x) \phi(y) \) and diagonal directions: \( \Psi^{(3)}(x,y) = \Psi(x) \Psi(y) \). This may be represented as a four channel perfect reconstruction filter bank as shown in Fig. 1. Now, each filter is 2D with the subscript indicating the type of filter (HPF or LPF) for separable horizontal and vertical components. By using these filters in one stage, an image is decomposed into four bands. There exist three types of detail images for each resolution: horizontal (HL), vertical (LH), and diagonal (HH). The operations can be repeated on the low low (LL) band using the second stage of identical filter bank. Thus, a typical 2D DWT, used in image compression, generates the hierarchical structure shown in Fig. 2.

**Fig. 1 One Filter Stage in 2D DWT**
(Subband pyramid: the coarsest-level LL band together with the HL, LH, and HH detail bands at each decomposition level.)
**Fig. 2 Structure of wavelet decomposition**
The transformation of the 2D image is a 2D generalization of the 1D wavelet transform already discussed. It applies the 1D wavelet transform to each row of pixel values. This operation provides an average value along with detail coefficients for each row. Next, these transformed rows are treated as if they were themselves an image, and the 1D transform is applied to each column. The resulting values are all detail coefficients except a single overall average coefficient. To complete the transformation, this process is repeated recursively, only on the quadrant containing averages.
Now let us see how the 2D Haar wavelet transformation is performed. The image is composed of pixels represented by numbers [19]. Consider the 8×8 image taken from a specific portion of a typical image shown in Fig. 3. The matrix (a 2D array) representing this image is shown in Fig. 4.
Now we perform the operation of averaging and differencing to arrive at a new matrix representing the same image in a more concise manner. Let us look how the operation is done. Consider the first row of the Fig. 4.
Averaging: \((64+2)/2=33, (3+61)/2=32, (60+6)/2=33, (7+57)/2=32\)
Differencing: \(64-33=31, 3-32=-29, 60-33=27, 7-32=-25\)
**Fig. 3 A 8×8 image**
**Fig. 4 2D array representing the Fig. 3**
So, the transformed row becomes \((33\ 32\ 33\ 32\ 31\ -29\ 27\ -25)\). Now the same operation is performed on the average values, i.e. \((33\ 32\ 33\ 32)\). Then we perform the same operation on the averages, i.e. the first two elements of the new transformed...
The point of the wavelet transform is that regions of little variation in the original image manifest themselves as small or zero elements in the wavelet transformed version. The 0's in the Fig. 6 are due to the occurrences of identical adjacent elements in the original matrix. A matrix with a high proportion of zero entries is said to be sparse. For most of the image matrices, their corresponding wavelet transformed versions are much sparser than the originals. Very sparse matrices are easier to store and transmit than ordinary matrices of the same size. This is because the sparse matrices can be specified in the data file solely in terms of locations and values of their non-zero entries.
It can be seen that in the final transformed matrix, many entries are zero. From this transformed matrix, the original matrix can be easily calculated just by the reverse operation of averaging and differencing, i.e. the original image can be reconstructed from the transformed image without loss of information. Thus, it yields a lossless compression of the image. However, to achieve a greater degree of compression, we have to think of lossy compression. In this case, a nonnegative threshold value, say $\varepsilon$, is set. Then, any detail coefficient in the transformed data whose magnitude is less than or equal to $\varepsilon$ is set to zero. This increases the number of 0’s in the transformed matrix and thus the level of compression. So, $\varepsilon=0$ is used for a lossless compression. If lossy compression is used, approximations of the original image can be built up. The setting of the threshold value is very important, as there is a tradeoff between the value of $\varepsilon$ and the quality of the compressed image. Finding an appropriate value of $\varepsilon$ is an interesting research question. Loosely speaking, the compression ratio of the image is calculated as the number of nonzero elements in the original matrix divided by the number of nonzero elements in the updated transformed matrix [20].
```
/* row transformation */
for (i = 0; i < row; i++) {
    w = col;
    do {
        /* averaging */
        for (j = 0; j < w/2; j++)
            a[j] = (mat[i][2*j] + mat[i][2*j+1]) / 2;
        /* differencing */
        for (j = 0; j < w/2; j++)
            a[w/2 + j] = mat[i][2*j] - a[j];
        for (j = 0; j < w; j++)
            mat[i][j] = a[j];
        w = w / 2;
    } while (w != 1);
}
/* column transformation */
for (i = 0; i < col; i++) {
    w = row;
    do {
        /* averaging */
        for (j = 0; j < w/2; j++)
            a[j] = (mat[2*j][i] + mat[2*j+1][i]) / 2;
        /* differencing */
        for (j = 0; j < w/2; j++)
            a[w/2 + j] = mat[2*j][i] - a[j];
        for (j = 0; j < w; j++)
            mat[j][i] = a[j];
        w = w / 2;
    } while (w != 1);
}
```
In summary, the main steps of 2D image compression using wavelets as the basis functions are: (a) Start with the matrix $P$ representing the original image. (b) Compute the transformed matrix $T$ by the averaging and differencing operations (first for each row, then for each column). (c) Choose a threshold value $\varepsilon$ ($\varepsilon = 0$ for lossless and $\varepsilon > 0$ for lossy compression). (d) Replace every coefficient of $T$ whose magnitude is smaller than or equal to $\varepsilon$ by zero; call the resulting matrix $D$. (e) Use $D$ to compute the compression ratio and to reconstruct the original image as well.
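Steps (c) and (d) can be sketched in C as follows; the function name is ours, and the overall average (index 0) is deliberately left untouched, since only detail coefficients are thresholded:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Step (d): zero every detail coefficient with magnitude <= eps,
   leaving the overall average (index 0) untouched. Returns the
   number of nonzero entries left, from which the compression
   ratio of step (e) follows. */
size_t threshold_count(double *t, size_t n, double eps) {
    size_t nonzero = 0;
    for (size_t i = 1; i < n; i++)
        if (fabs(t[i]) <= eps) t[i] = 0;
    for (size_t i = 0; i < n; i++)
        if (t[i] != 0) nonzero++;
    return nonzero;
}
```

For the transformed vector [6 2 1 -1] with $\varepsilon = 1$, the two $\pm 1$ details are zeroed, leaving 2 nonzero entries, for a compression ratio of 4:2.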
Now we see the effect of one step of averaging and differencing on an image. Fig. 8 is the original image and Fig. 9 is the transformed image after applying one step of averaging and differencing. More steps produce a deeper decomposition.
III. CONCURRENT PROGRAM TESTING
A concurrent program consists of a collection of sequential processes whose execution is interleaved; the interleaving is the result of choices made by a scheduler. A great many execution interleavings are possible, making exhaustive testing of all but trivial concurrent programs infeasible.
To make matters worse, functional specifications for concurrent programs often concern intermediate steps of the computation. For example, consider a word-processing program with two processes: one that formats pages and passes them through a queue to the second process, which controls a printer. The functional specification might stipulate that the page-formatter process never deposit a page image into a queue slot that is full and that the printer-control process never retrieve the contents of an empty or partially filled queue slot.
If contemplating the individual interleavings of a concurrent program is infeasible, then we must seek methods that allow all executions to be analyzed together. We have on hand a succinct description of the entire set of executions: the program text itself. Thus, analysis methods that work directly on the program text (rather than on the executions it encodes) have the potential to circumvent problems that limit the effectiveness of testing. For example, here is a rule for showing that some bad thing doesn’t happen during execution:
Identify a relation among the program variables that is true initially and is left true by each action of the program. Show that this relation implies the "bad thing" is impossible.
Thus, to show that the printer-control process in the previous example never reads the contents of a partially filled queue slot (a bad thing), we might see that the shared queue is implemented in terms of two variables:
NextFull points to the queue slot that has been full the longest and is the one the printer-control process will next read.
FirstEmpty points to the queue slot that has been empty the longest and is the one where the page-formatter process will next deposit a page image.
We would then establish that NextFull ≠ FirstEmpty is true initially and that no action of either process falsifies it. And, from the variable definitions, we would note that NextFull ≠ FirstEmpty implies that the printer-control process reads the contents of a different queue slot than the page-formatter process writes, so the "bad thing" cannot occur.
It turns out that all functional specifications for concurrent programs can be partitioned into bad things and good things. Thus, a rule for such good things will complete the picture. To show that some good thing does happen during execution: Identify an expression involving the program variables that when equal to some minimal value implies that the "good thing" has happened. Show that this expression (a) is decreased by some program actions that must eventually run, and (b) is not increased by any other program action.
Note our rules for bad things and good things do not require checking individual process interleavings. They require only effort proportional to the size of the program being analyzed. Even the size of a large program need not be an impediment—large concurrent programs are often just small algorithms in disguise. Such small concurrent algorithms can be programmed and analyzed; we build a model and analyze it to gain insight about the full-scale artifact [21].
Thus, the correct sequencing of the interactions or communications between different tasks, and the coordination of access to resources that are shared between tasks, are key concerns during the design of concurrent computing systems. This is why writing correct concurrent programs is harder than writing sequential ones: the set of potential risks and failure modes is larger. Anything that can go wrong in a sequential program can also go wrong in a concurrent one, and with concurrency come additional hazards not present in sequential programs, such as race conditions, data races, deadlocks, and missed signals.
In some concurrent computing systems communication between the concurrent components is hidden from the programmer, while in others it must be handled explicitly. Explicit communication can be divided into two classes:
Shared Memory Communication: Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java). This style of concurrent programming usually requires the application of some form of locking (e.g. mutual exclusion) to coordinate between multiple threads.
Message Passing Communication: Concurrent components communicate by exchanging messages (exemplified by Erlang). The exchange of messages may be carried out asynchronously (sometimes referred to as "send and pray", although it is standard practice to resend messages that are not acknowledged as received), or may use a style in which the sender blocks until the message is received. Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust, although slower, form of concurrent programming.

Testing concurrent programs is also harder than testing sequential ones. This is trivially true: tests for concurrent programs are themselves concurrent programs. But it is also true for another reason: the failure modes of concurrent programs are less predictable and repeatable than those of sequential programs. Failures in sequential programs are deterministic; if a sequential program fails with a given set of inputs and initial state, it will fail every time. Failures in concurrent programs, on the other hand, tend to be rare probabilistic events.
Because of this, reproducing failures in concurrent programs can be difficult. Not only might the failure be rare, and therefore not manifest itself frequently, but it might not occur at all in certain platform configurations, so that a bug that happens daily at a customer's site might never appear in the test lab. Further, attempts to debug or monitor the program can introduce timing or synchronization artifacts that prevent the bug from appearing at all. As in Heisenberg's uncertainty principle, observing the state of the system may in fact change it.
IV. THE CONCURRENT COMPRESSION SYSTEM
This section describes how the image is concurrently transformed, the problem that arises from the concurrency, the model of the system, the verification of the model, and the results obtained.
A. Concurrent Transformation of Image
We know that wavelet transformation entails transforming the image data horizontally first and then vertically. Here we divide the image plane into \( n \) horizontal sections which are horizontally transformed concurrently. The image is then divided into \( n \) vertical sections which are vertically transformed concurrently. The number of horizontal sections need not equal the number of vertical sections. Fig. 10 illustrates the method.
But the problem lies in the concurrency. The system just proposed leaves open the possibility that vertical transformation begins on some vertical sections before horizontal transformation is completed in all sections. Vertical sections that are already horizontally transformed can be vertically transformed, as illustrated in Fig. 11. This allows threads that have completed horizontal transformation to proceed to vertical transformation without waiting for the other threads to complete horizontal transformation. The gray color indicates sections of image data that are horizontally transformed; the white color indicates sections that are not yet horizontally transformed. The gray vertical section with line stripes can be assigned to a thread for vertical transformation. Before a vertical section is available for transformation, the condition that must be met is that all horizontal sections have transformed \( n \) columns of data horizontally, so that an \( n \)-wide vertical section is available in which all data points are already horizontally transformed.
The assertion for the verification is that at any time, the vertical transformation does not start on a vertical section that is not horizontally transformed. In Fig. 12, the vertical transformation can start in vertical sections \( V_6 \) and \( V_7 \) but not in \( V_3 \) through \( V_7 \).
B. Model Verification
To understand the term “model”, we need to be familiar with transition systems and Kripke structures. A transition system is a structure \( TS = (S, S0, R) \) where \( S \) is a finite set of states, \( S0 \subseteq S \) is the set of initial states, and \( R \subseteq S \times S \) is a transition relation which must be total, i.e., for every \( s \) in \( S \) there exists \( s' \) in \( S \) such that \( (s, s') \) is in \( R \). A Kripke structure is a tuple \( M = (S, S0, R, AP, L) \), where \( (S, S0, R) \) is a transition system, \( AP \) is a finite set of atomic propositions (each proposition corresponds to a variable in the model), and \( L \) is a labeling function that labels each state with the set of atomic propositions that are true in that state. The atomic propositions and \( L \) together turn a transition system into a model.
The foremost step in verifying a system is to specify the properties that the system should have. For example, we may want to show that some concurrent program never deadlocks. These properties are expressed in *temporal logic*. Computation Tree Logic (CTL) is one version of temporal logic and is currently one of the popular frameworks used for verifying properties of concurrent systems [22]. Once we know which properties are important, the second step is to construct a *formal model* of the system. The model should capture those aspects that must be considered to establish correctness. Model checking then consists of traversing the state-transition graph (the *Kripke structure*) and verifying whether it satisfies the formula representing the property; more concisely, whether the system is a model of the property.
Each CTL formula is either true or false in a given state of the *Kripke structure*. Its truth is evaluated from the truth of its sub-formulae in a recursive fashion, until one reaches atomic propositions that are either true or false in a given state. A formula is satisfied by a system if it is true in all the initial states of the system. Mathematically, suppose a *Kripke structure* \( K = (S, S0, R, AP, L) \) (the system model) and a CTL formula \( \Psi \) (the specification of the property) are given. We have to determine whether \( K \models \Psi \) holds (\( K \) is a model of \( \Psi \)). \( K \models \Psi \) holds iff \( K, s0 \models \Psi \) for every \( s0 \in S0 \). If the property does not hold, the model checker produces a counterexample: an execution path that does not satisfy the formula.
Atomic propositions, the standard boolean connectives of propositional logic (e.g., AND, OR, NOT), and temporal operators together are used to build CTL formulae. Each temporal operator is composed of two parts: a path quantifier (universal (A) or existential (E)) followed by a temporal modality (F, G, X, U), and is interpreted relative to an implicit “current state”. There are generally many execution paths (sequences of state transitions) of the system starting at the current state. The path quantifier indicates whether the modality defines a property that should hold of all those possible paths (the universal path quantifier A) or whether it need only hold on some path (the existential path quantifier E). The temporal modalities describe the ordering of events in time along an execution path and have the following meanings.
- **F φ** (read “φ holds sometime in the future”) is true of a path if there exists a state in that path where φ is true.
- **G φ** (read “φ holds globally”) is true of a path if φ is true in each and every state of that path.
- **X φ** (read “φ holds in the next state”) is true of a path if φ is true in the state reached immediately after the current state in that path.
- **φ U ψ** (read “φ holds until ψ holds”) is true of a path if ψ is true in some state of that path and φ holds in all preceding states.
The semantics of the CTL operators are stated below:
- **K, s |= EX (Ψ)** iff there exists s’ such that s → s’ (i.e., R(s, s’)) and K, s’ |= Ψ. It means that s has a successor state s’ at which Ψ holds.
- **K, s |= EU (Ψ1, Ψ2)** iff there exists a path L = s0, s1, … from s and some k ≥ 0 such that K, L(k) |= Ψ2 and, for 0 ≤ j < k, K, L(j) |= Ψ1.
- **K, s |= AU (Ψ1, Ψ2)** iff for every path L = s0, s1, … from s there exists k ≥ 0 such that K, L(k) |= Ψ2 and, for 0 ≤ j < k, K, L(j) |= Ψ1.
- **AX (Ψ):** It is not the case that there exists a next state at which Ψ does not hold, i.e., Ψ holds at every next state.
- **EF (Ψ):** There exists a path L from s and some k ≥ 0 such that K, L(k) |= Ψ.
- **AG (Ψ):** It is not the case that there exists a path L from s and k ≥ 0 such that K, L(k) |= ¬Ψ, i.e., for every path L from s and every k ≥ 0, K, L(k) |= Ψ.
- **AF (Ψ):** For every path L from s, there exists k ≥ 0 such that K, L(k) |= Ψ.
- **EG (Ψ):** It is not the case that for every path L from s there is a k ≥ 0 such that K, L(k) |= ¬Ψ, i.e., there exists a path L from s such that, for every k ≥ 0, K, L(k) |= Ψ.
Some of the basic CTL operators stated above are shown graphically in Fig. 13. In this figure, if it is assumed that the formula f holds in the filled states, then we can say that EF f, AF f, EG f, and AG f are satisfied in the initial state.
CTL formulas are sometimes difficult to interpret, and as a result a designer may fail to understand what property has actually been verified. We therefore list some common CTL constructs used in hardware verification.
- **AG (Request → AF Acknowledgement): For all reachable states (AG), if Request is asserted in a state, then always at some later point (AF) we must reach a state where Acknowledgement is asserted.** AG is interpreted relative to the initial states of the system, whereas AF is interpreted relative to the state where Request is asserted. A common mistake is to write Request → AF Acknowledgement in place of AG (Request → AF Acknowledgement). The former means only that if Request is asserted in the initial state, then eventually we reach a state where Acknowledgement is asserted, while the latter requires the condition to hold for every reachable state where Request holds. If Request is identically true, AG (Request → AF Acknowledgement) reduces to AG AF Acknowledgement.
- **AG (AF DeviceEnabled): The proposition DeviceEnabled holds infinitely often on every computational path.**
- **AG (EF start): From any reachable state, there must exist a path starting at that state that reaches a state where start is asserted. In other words, it must always be possible to reach the restart state.**
- **EF (x ∧ EX (x ∧ EX y)) → EF (y ∧ EX EX z): It is possible for x to be asserted in three consecutive states, then it is also possible to reach a state where y is asserted and from there to reach in two more steps a state where z is asserted.**
- **EF (~Ready ∧ Started): It is possible to get to a state where Started holds but Ready does not hold.**
- **AG (Send → A (Send U Receive)): It is always the case that if Send occurs, then eventually Receive is true, and until that time, Send must continue to be true.**
- **AG (in → AX AX AX out): Whenever in goes high, out will go high within three clock cycles.**
C. The Model
Fig. 14 shows the state diagram of the model for verification. It illustrates the tasks of a thread performing horizontal transformation on a horizontal section and vertical transformation on zero or more vertical sections.
In order to establish the communication between multiple threads, we require some work to set up and maintain the communication channels. There are several ways to communicate between threads, with some being more efficient than others.
One of the simplest ways to communicate state information between threads is to use a shared object or shared block of memory. A shared object requires very little setup—all we have to do is make sure each thread has a pointer to the object. The object contains whatever custom information we need to communicate between threads, so it should be very efficient.
The second option is the port-based communication. Ports offer a fast and reliable way to communicate between threads and processes on the same or different computers. Ports are also a fairly standard form of communication on many different platforms and their use is well established. In Mac OS X, a port implementation is provided by the Mach kernel. These Mach ports can be used to pass data between processes on the same computer.
The third way is the use of the message queues. The message queues offer an easy-to-use abstraction for thread communication. A message queue is a first-in, first-out (FIFO) queue that manages incoming and outgoing data for the thread. A thread can have both an input and an output queue. The input queue contains work the thread needs to perform, while the output queue contains the results of that work.
To establish communication between the threads, a reliable communication channel is required. Here, this communication channel is modeled as a queue of messages, which is an integral part of the threads. Fig. 15 shows the modeling of the channel as a message queue from thread 1 to thread 2: a message is pushed onto the tail of the queue on the thread 1 side and received from the head of the queue on the thread 2 side. Fig. 16 shows the modeling of the communication channel as a queue for messages from thread 2 to thread 1: a message is sent from the thread 2 side and received at the head of the queue on the thread 1 side.
D. Verification of the Model
The specification of the proposed model verified by SMV [23] is \textit{SPEC AG (AU (hor_trans_count = Maxhor, bool_vert_trans))}, and its result is true. It means that in all states of the transition system it is true that no vertical transformation gets started in any state of any path until the variable \textit{hor_trans_count} equals the maximum number of horizontal sections, i.e., Maxhor. This specification is the most important one that must hold for the correct wavelet transformation required for the compression of the image.
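To make the setting concrete, the property can be sketched in a hypothetical NuSMV-style model. Everything below is an assumption of this illustration: Maxhor is fixed to 2, progress of the horizontal threads is abstracted into a single counter, and the until-property is written following the prose reading above ("no vertical transformation until all horizontal sections are done"), not reproducing the exact formula of the verified model.

```
MODULE main
VAR
  hor_trans_count : 0..2;          -- Maxhor = 2 horizontal sections
  bool_vert_trans : boolean;       -- vertical transformation started?
ASSIGN
  init(hor_trans_count) := 0;
  init(bool_vert_trans) := FALSE;
  next(hor_trans_count) :=
    case
      hor_trans_count < 2 : hor_trans_count + 1;  -- a section finishes
      TRUE                : hor_trans_count;
    esac;
  next(bool_vert_trans) :=
    case
      hor_trans_count = 2 : TRUE;  -- vertical work may begin
      TRUE                : bool_vert_trans;
    esac;
-- prose reading: vertical transformation stays off until hor_trans_count = Maxhor
SPEC AG (A [ !bool_vert_trans U hor_trans_count = 2 ])
```

The actual model of Fig. 14 is, of course, richer: it tracks per-thread progress and the message queues of Section IV.C rather than a single counter.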
V. CONCLUSION
In this paper we have presented how the discrete wavelet transform is used for image compression, a model of the concurrent wavelet transformation for the compression of large images, and, more importantly, the formal verification of the proposed model using the model checking tool SMV, which automatically creates a formal environment in which design-checking tasks can be solved efficiently. Several properties of the model have been verified. One of the important properties in the context of the concurrent DWT transformation, namely that at no time does the vertical transformation start on a vertical section that is not yet horizontally transformed, holds true in the model. To our knowledge, this is the first time the concurrent wavelet transformation for image compression has been formally verified. One drawback of our modeling in SMV is the small size of the queue shared by the different threads; in this respect, we hope to use other verification tools such as SPIN in the future.
REFERENCES
Cleaning up Erlang Code is a Dirty Job but Somebody’s Gotta Do It
Thanassis Avgerinos
School of Electrical and Computer Engineering,
National Technical University of Athens, Greece
ethan@softlab.ntua.gr
Konstantinos Sagonas
School of Electrical and Computer Engineering,
National Technical University of Athens, Greece
kostis@cs.ntua.gr
Abstract
This paper describes opportunities for automatically modernizing Erlang applications, cleaning them up, eliminating certain bad smells from their code and occasionally also improving their performance. In addition, we present concrete examples of code improvements and our experiences from using a software tool with these capabilities, tidier, on Erlang code bases of significant size.
Categories and Subject Descriptors D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement—Restructuring, reverse engineering, and reengineering
General Terms Design, Languages
Keywords program transformation, refactoring, code cleanup, code simplification, Erlang
1. Introduction
Most programmers write code. Good programmers write code that works. Very good programmers besides writing code that works also rewrite their code in order to simplify it, clean it, and make it more succinct, modern and elegant. While there will probably never be any real substitute for very good programmers, one might wonder whether there is some intrinsic reason why certain code rewriting tasks cannot be automated and become part of the development tool suite so that even good programmers can readily and effortlessly employ them on their code.
This question has been bothering us for quite some time now, in Erlang and elsewhere. Rather than just pondering it, we decided to embark on a project aiming to automate the modernization, clean up and simplification of Erlang programs. We started by standing on the shoulders of erl_tidy, a module of the syntax_tools application of Erlang/OTP written by Richard Carlsson, but as we will soon see we have significantly extended it in functionality, features and user-friendliness. The resulting tool is called tidier.
Tidier is a software tool that modernizes and cleans up Erlang code, eliminates certain bad smells from it, simplifies it and improves its performance. In contrast to other refactoring tools for Erlang, such as RefactorErl [9] and Wrangler [7], tidier is completely automatic and not tied to any particular editor or IDE. Instead, tidier comes with a suite of code refactorings that can be selected by its user via appropriate command-line options and applied in bulk on a set of modules or applications. This paper provides only a bird’s eye view of the transformations currently performed by tidier; a complete description of tidier and its capabilities is presented in a companion paper [11]. Instead, the main goal of this paper is to report our experiences from using tidier, shed light on some opportunities for code cleanups on existing Erlang source code out there and raise the awareness of the Erlang community on these issues.
The next section contains a brief presentation of tidier. The main section of this paper, Section 3, gives a capsule review for each refactoring currently performed by tidier and shows interesting code fragments we have encountered while trying out tidier on various open source Erlang applications. Section 4 presents tables showing the number of opportunities for tidier’s refactorings on several code bases of significant size and discusses tidier’s effectiveness. Section 5 presents characteristics of Erlang code that currently prevent tidier from performing more aggressive refactorings, while at the same time preserving its main characteristics, and discusses planned future improvements. The paper ends with some concluding remarks.
2. Tidier: Characteristics and Overview
From the beginning we set a number of primary goals for tidier:
- Tidier should support a fully automatic mode, meaning that all the refactorings should be such that they can be applied on programs without user confirmation.
- Tidier should be flexible. Users should be able to decide about the set of refactorings that they want from tidier and, if they choose so, supervise or even control the refactorings that are performed.
- Tidier should never be wrong. Due to its fully automatic nature, tidier should perform a refactoring only if it is absolutely certain that the transformations performed are semantically-preserving, even if this comes at the cost of missing some opportunity or performing some weaker but safer refactoring.
- Tidier’s refactorings should be natural and as good as they get. The resulting code should, up to a certain extent, resemble the code that experienced Erlang programmers would have written if they performed these refactorings by hand.
- Tidier should be easy to use and not be bound to any particular editor or IDE.
- Tidier should be fast. So fast that it can be included in the typical make cycle of applications without imposing any significant overhead; ideally, an overhead that is hard to notice.
Furthermore, we set a list of criteria that would serve as indicators of whether a specific refactoring should be performed by tidier. The transformations should produce code that is:
- **modernized**: obsolete language constructs are removed and the most modern construct is used for the job.
- **simpler**: the resulting code is shorter, simpler and therefore more elegant.
- **less redundant**: the resulting code contains fewer redundancies than the original version.
- **at least as fast**: the new code does not deteriorate in performance and, if possible, becomes even faster.
**Modes of using tidier.** One of our goals has been that tidier should be very easy to use. Indeed, the simplest way to use tidier on some Erlang file is via the command:
```bash
> tidier myfile.erl
```
If all goes well, this command will automatically refactor the code of `myfile.erl` and overwrite the contents of the file with the resulting source code (also leaving a backup file in the same directory). Multiple source files can also be given. Alternatively, the user can tidy a whole set of applications by a command of the form:
```bash
> tidier applic1/src ... applicN/src
```
which will tidy all `.erl` files somewhere under these directories. Both of these commands will apply the default set of transformations on all files. If only some of the transformations are desired, the user can select them via appropriate command-line options. For example, one can issue the command:
```bash
> tidier --guards --case-simplify myfile.erl
```
to only rewrite guards to their modern counterparts (Section 3.1) and simplify all case expressions (Section 3.3) of `myfile.erl`. We refer the reader to tidier’s manual for the complete and up-to-date set of command-line options.
A very handy option is the `-n` (or `--no-transform`) option that will cause tidier to just print on the standard output the list of transformations that would be performed on these files, together with their lines, without performing them. Alternatively the user can use the `-g` (or `--gui`) option to invoke tidier’s GUI and perform refactoring interactively. We expect that novice tidier users will probably prefer this mode of using tidier, at least initially.
Let us examine tidier’s GUI. Figure 1 shows tidier in action. In fact, the snapshot depicts tidier refactoring a file from the `inviso` application of Erlang/OTP R13B. Tidier has identified some code as a candidate for simplification and shows the final version of this code to its user. What the snapshot does not show is that the simplification involves three different refactorings and that tidier has previously shown all these refactorings, one after the other, to its user. At the point when the snapshot was taken, tidier’s GUI shows the old code (on the left) and the new code (on the right); the code parts that differ between the two versions are coloured appropriately (red for the old excerpt of the code, green for the new). At this point, the user can either press the “Use suggested version” button to accept tidier’s transformation or the “Keep original version” button to bypass it. In either case, tidier will continue with the next refactoring, or exit if this is the last one.
As a side comment, at some point during tidier’s development we were thinking of giving the user the possibility to edit the code on the right (i.e., allowing the user to fine-tune tidier’s refactorings), but we have given up on this idea as it requires dealing with too many issues which are peripheral to the main goals of tidier (e.g., how should tidier continue if the user inputs code which is syntactically erroneous, should there be an “undo” option, etc.). The user can and should better use an editor for such purposes.
### 3. Transformations Performed by Tidier
Let us now see the transformations that tidier performs.
#### 3.1 Transformations inherited from erl_tidy
Some of tidier’s transformations were inherited from the `erl_tidy` module of Erlang/OTP’s `syntax_tools` application. They are all quite simple but, since they are part of tidier and the basis for our work, we briefly describe them here.
**Modernizing guards**
For many years now, the Erlang/OTP system has been supporting two sets of type checking functions: old-style ones (atom/1, integer/1, . . .) and new-style ones (is_atom/1, is_binary/1, is_integer/1, . . .). All this time, the implicit recommendation has been that applications should gradually convert to using new-style guards, but not all applications have done so. Those that have not were recently given one more incentive to do so: the compiler of the R13B release of Erlang/OTP stopped being silent about uses of old-style guards and now generates warnings for them by default.
Note that the modernization of guards is both a rather tedious job for programmers and a task that cannot be automated easily. For example, it cannot be performed by a global search and replace without the programmer’s full attention, or by a simple sed-like script that does not understand what a guard position in Erlang is. Consider the following Erlang code which, although artificial and of really poor code quality, is syntactically valid. It is probably not immediately obvious to the human eye where the guard is.
```erlang
-module(where_is_the_integer_guard).
-export([obfuscated_integer_test/1]).
obfuscated_integer_test(X) ->
integer(X) =:= integer.
integer(X) when integer(X) -> integer;
integer(_) -> not_an_integer.
```
In contrast, for an automated refactoring tool like tidier, which understands Erlang syntax, the modernization of guards is a simple and straightforward task.
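Once the guard position is identified, the rewrite itself is mechanical. A minimal before/after sketch (the function and its names are our own, not taken from any real code base):

```erlang
%% Before: old-style type test in the guard.
classify(X) when integer(X), X > 0 -> positive;
classify(_) -> other.
```
⇓
```erlang
%% After: tidier's modernized, new-style guard.
classify(X) when is_integer(X), X > 0 -> positive;
classify(_) -> other.
```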
**Eliminating explicit imports**
This transformation eliminates all import statements and rewrites all calls to explicitly imported functions as remote calls as shown:
```erlang
-import(m1, [foo/1]).
-import(m2, [bar/2]).

t(X) ->
    case foo(X) of
        ... -> bar(A, B)
    end.
```
⇓
```erlang
t(X) ->
    case m1:foo(X) of
        ... -> m2:bar(A, B)
    end.
```
Admittedly, to a large extent the import-elimination refactoring is a matter of taste. Its primary goal is not to make the code shorter but to improve its readability and understandability by making clear to the eye which calls are calls to module-local functions and which are remote calls. In addition, in large code bases, it makes it easier to find (e.g., using tools like Unix’s grep) all calls to a specific m:f function of interest. Of course, it is possible to do the above even in files with explicit imports, but it is often more difficult.
**Eliminating appends and subtracts**
This is a very simple refactoring that substitutes all occurrences of calls to lists:append/2 and lists:subtract/2 functions with the much shorter equivalent operators ++ and --. The main purpose of this refactoring is to reduce the size of source code but also improve readability.
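For instance, the rewrite has the following shape (a minimal sketch; the variable names are our own):

```erlang
lists:append(Xs, Ys),
lists:subtract(Xs, Ys)
```
⇓
```erlang
Xs ++ Ys,
Xs -- Ys
```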
**Transforming maps and filters to list comprehensions**
This is a modernization refactoring that involves the transformation of lists:map/2 and lists:filter/2 to an equivalent list comprehension. The goals of this transformation are threefold: (a) reduce the source code size; (b) express the mapping or filtering of a list in a more elegant way and (c) increase the opportunities for further refactorings that involve list comprehensions as we will see.
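Both directions of the transformation look roughly as follows (the fun bodies are placeholders of our own choosing):

```erlang
lists:map(fun(X) -> X * X end, L),
lists:filter(fun(X) -> X > 0 end, L)
```
⇓
```erlang
[X * X || X <- L],
[X || X <- L, X > 0]
```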
**Transforming fun expressions to functions**
This is the Erlang analogue of the extract method refactoring in object oriented languages [5]. This particular refactoring removes fun expressions from functions and transforms them into module local functions. This transformation primarily aims at improving code readability but can also be used for detecting opportunities for clone removal as also noted by the developers of Wrangler [6].
#### 3.2 Simple transformations
From this point on, all transformations we present are not performed by erl_tidy. We start with the simple ones.
**Transforming coercing comparisons to exact equalities and inequalities**
In the beginning, the Erlang Creator was of the opinion that the only reasonable numbers were arbitrary precision integers, and consequently one equality (==) and one inequality (/=) operator were sufficient for comparing numbers. At a later point, it was realized that some programming tasks occasionally also need to manipulate floating point numbers, and consequently Erlang was enriched with them. Most probably because C programmers were accustomed to == having coercing semantics for numbers, comparison operators for exact equality (=:=) and exact inequality (=/=) were added to the language; these operators perform matching rather than coercion. Up to this point all is fine. The problem is that in 99% of all numeric comparisons Erlang programmers actually want matching semantics, so tidier transforms coercing comparisons into their exact counterparts whenever its type analysis shows this to be safe.

**Modernizing calls to lists:keysearch/3**

Another modernization replaces calls to lists:keysearch/3 with the simpler lists:keyfind/3, which returns the found tuple directly instead of wrapping it in {value, Tuple}. Because the two functions return different values, the transformation is not a mere renaming. Consider the following code:
```erlang
case lists:keysearch(Child#child.name, Pos, Res) of
{value, _} -> {duplicate_child, Child#child.name};
_ -> check_startspec(T, [Child#child.name])
end
```
To preserve the semantics, this code should be changed to:
```erlang
case lists:keyfind(Child#child.name, Pos, Res) of
  false -> check_startspec(T, [Child#child.name]);
_ -> {duplicate_child, Child#child.name}
end
```
and indeed this is the transformation that tidier performs, based on type information about the return values of the two functions. Moreover, notice that there are calls to lists:keysearch/3 that cannot be changed to lists:keyfind/3. One of them, where the matching is used as an assertion, is shown below:
```erlang
{value, _} = lists:keysearch(delete, 1, Query),
```
This particular transformation involving lists:keysearch/3 is just one member of a wider set of similar function modernizations that are currently performed by tidier. Their purpose is to assist programmers with software maintenance and upgrades. Judging from the number of obsolete function warnings we have witnessed remaining unchanged across different releases, both in Erlang/OTP and elsewhere, it seems that in practice updating deprecated functions is a very tedious task for Erlang programmers to perform manually.
**Record transformations**
The record transformations refer to a series of record-related transformations performed by tidier. Detailed examples can be found in the companion paper [11], but briefly the refactoring consists of three main transformation steps: (i) converting is_record/2 and is_record/3 guards to clause matchings; (ii) generating fresh variables for the record fields that are used in the clause and matching them with the corresponding fields in the clause pattern; (iii) replacing record accesses in the clause body with the new field variables. Record transformations lead to shorter and cleaner code, improve code readability, may trigger further refactorings, and, when applied en masse, can even improve performance.
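The three steps can be illustrated on a small example of our own making (the `#person` record and the function are hypothetical):

```erlang
-record(person, {name, age}).

%% Before: is_record/2 guard and a record access in the body.
greeting(P) when is_record(P, person) ->
    {hello, P#person.name}.
```
⇓
```erlang
%% After: the guard becomes a clause pattern and the accessed
%% field is bound to a fresh variable in the clause head.
greeting(#person{name = Name}) ->
    {hello, Name}.
```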
#### 3.3 Transformations that eliminate redundancy
Various refactorings specialize the code and remove redundancies.
**Specializing the size/1 function**
Tidier employs this refactoring to find opportunities to specialize the size/1 function. Since Erlang/OTP R12 there exist two new BIFs that return the size of tuples (tuple_size/1) and binaries (byte_size/1). By performing a local type analysis, tidier automatically performs this substitution whenever possible. Such a refactoring has a lot of benefits: (i) modernizes the code; (ii) makes the programmers’ intentions about types clear rather than implicit; (iii) assists bug detection tools like Dialyzer [5] to detect type clashes with less effort; (iv) slightly improves the performance of programs; and (v) often triggers further simplifications.
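A sketch of the specialization (the function is ours; the is_binary/1 guard is what lets the local type analysis prove the argument's type):

```erlang
%% Before: generic size/1; the argument's type stays implicit.
payload_size(Bin) when is_binary(Bin) -> size(Bin).
```
⇓
```erlang
%% After: specialized to byte_size/1, making the intended type explicit.
payload_size(Bin) when is_binary(Bin) -> byte_size(Bin).
```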
**Simplifying guard sequences**
This refactoring removes redundant guards and simplifies guard sequences. Some examples are shown below (the `when` keyword is not shown), where we have taken the liberty to combine guard simplifications with some other refactorings we have previously introduced.
```erlang
is_list(L), length(L) > 42     ⇒   length(L) > 42
is_integer(N), N == 42         ⇒   N =:= 42
is_tuple(T), size(T) < 42      ⇒   tuple_size(T) < 42
```
Such refactorings reduce the code size (both source and object) and also improve performance.
**Structure reuse**
The structure reuse refactoring is quite similar to (and inspired from) transformations that optimizing compilers perform. Identical structures (tuples or lists) in the same clause containing fully evaluated terms (i.e., not calls) as subterms are identified by tidier and their first occurrences are assigned to fresh variables. When the identification phase is over, tidier simply replaces all subsequent occurrences of the identical structures with the new variables. This refactoring reduces the code size and also improves performance.
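A sketch of the idea (function names are ours):

```erlang
%% Before: the tuple {a, b, c} is constructed twice.
f() ->
    g({a, b, c}),
    h({a, b, c}).
```
⇓
```erlang
%% After: the first occurrence is bound to a fresh variable and reused.
f() ->
    T = {a, b, c},
    g(T),
    h(T).
```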
**Straightening case expressions**
We use the term straightening to describe the refactoring of a case expression to a matching statement. Such a refactoring can only be applied when the case expression has only one alternative clause. Tidier identifies those cases and performs this transformation provided that the body of the case does not contain any comments (presumably commented-out alternative case clauses or some message that the treatment in the case body is currently incomplete).
**Temporary variable elimination**
This is another refactoring inspired from compiler optimizations, namely from copy propagation. Temporarily storing an intermediate result in a variable that is used in the immediately following expression is actually commonplace in almost all programming languages. By performing this refactoring, tidier eliminates the temporary variable and replaces its use with its value. This transformation, combined with the straightening refactoring of the previous paragraph, can lead to significant simplifications. For example, consider the following fragment from the development version of Ejabberd’s source code (file src/ejabberd_c2s.erl:1951, with one variable renamed so that the code fits here):
```erlang
get_statustag(P) ->
case xml:get_path_s(P, [{elem, "status"}, cdata]) of
ShowTag -> ShowTag
end.
```
By straightening the case expression and eliminating the temporary variable, tidier transforms this code to:
```erlang
get_statustag(P) ->
xml:get_path_s(P, [{elem, "status"}, cdata]).
```
However, if tidier applied this refactoring aggressively, we would end up with code ‘simplifications’ that would look completely unnatural and most probably would never be performed by a programmer. An example of unwanted behaviour from this refactoring is illustrated below:
```erlang
get_results(BitStr) ->
Tokens = get_tokens(BitStr),
ServerInfo = get_server_info(Tokens),
process_data(ServerInfo).
```
⇓
```erlang
get_results(BitStr) ->
    process_data(get_server_info(get_tokens(BitStr))).
```
Since few Erlang programmers would consider the resulting code an improvement over the original one as far as code readability is concerned, tidier does not perform such refactorings. Instead, tidier performs the temporary variable elimination refactoring when:
- The variable that was used to store the temporary result is eventually used to return the result of a clause (as in the first example we saw).
- It is determined that such a refactoring can lead to further and more radical refactorings later on (such as the ones we will present in Section 3.4). In this case, to ensure that such refactorings are possible after the transformation, tidier has to perform a speculative analysis about the result of further refactorings after this transformation.
**Simplifying expressions**
While reviewing Erlang code fragments, we have come across a conglomeration of expression simplifications that could be achieved just by applying some simple transformations. Specifically, a very frequent case involved the simplification of boolean case and if expressions.
As an actual such example, the first transformation of Figure 2 shows the simplification of source code from Erlang Web (file wparts-1.2.1/src/wtype_time.erl:177). The code will be simplified even further by tidier once the is_between/3 guard Erlang Enhancement Proposal [10] is accepted, by unfolding the lists:all/2 call as shown in the second transformation of the same figure. This last step has not been implemented yet.
```erlang
is_valid_time([H1, H2, H3]) ->
Hour = if (H1 >= 0) and (H1 < 24) -> true;
true -> false
end,
Minute = if (H2 >= 0) and (H2 < 60) -> true;
true -> false
end,
Sec = if (H3 >= 0) and (H3 < 60) -> true;
true -> false
end,
lists:all(fun(X) -> X == true end, [Hour, Minute, Sec]).
```
One more case where it is possible to do a transformation similar to the above is when the fun is used in lists:filter/2 and defines a total boolean function (i.e., a function that does not impose any constraints on its argument), as in the code below (from Erlang/OTP R13B's lib/kernel/src/pg2.erl:278):
```erlang
del_node_members([[Name, Pids] | T], Node) ->
NewMembers =
lists:filter(fun(Pid) when node(Pid) =:= Node -> false;
(_) -> true
end, Pids),
...
```
which tidier automatically transforms to:
```erlang
del_node_members([[Name, Pids] | T], Node) ->
NewMembers = [Pid || Pid <- Pids, node(Pid) =/= Node],
...
```
**Deforestation in map+filter combinations**
Some nested calls to lists:map/2 and lists:filter/2 are transformed to a single list comprehension by tidier, thus eliminating the intermediate list and effectively performing deforestation [13] at the source code level. (The companion paper [11] contains an interesting such example.) Whenever the calls to map and filter are not nested, tidier performs a speculative analysis employing the temporary result elimination refactoring from Section 3.3 to see if this can create further opportunities for deforestation. In either case, tidier will perform the deforestation only in cases it is certain that doing so will not alter the exception behaviour of the code (e.g., miss some exception that the original code generates). We will come back to this point in Section 5.
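For the nested case, the transformation has roughly the following shape (f/1 and p/1 stand for arbitrary side-effect-free functions; the sketch is ours):

```erlang
lists:map(fun(X) -> f(X) end,
          lists:filter(fun(X) -> p(X) end, L))
```
⇓
```erlang
[f(X) || X <- L, p(X)]
```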
**Zipping and unzipping**
In general, type information (hard-coded or automatically inferred through analysis) can radically improve the resulting refactorings. For example, tidier has hard-coded information that the result of lists:zip/2 is a list of pairs. This allows tidier to perform function inlining in cases that it would not have been possible without such information. It also prepares tidier for the possibility that comprehension multigenerators become part of the language.
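For example, knowing that every element of the zipped list is a pair allows a tuple pattern directly in the generator (a sketch with names of our own choosing):

```erlang
lists:map(fun({X, Y}) -> X + Y end, lists:zip(Xs, Ys))
```
⇓
```erlang
[X + Y || {X, Y} <- lists:zip(Xs, Ys)]
```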
Since tidier is treating calls to lists:zip/2 specially, it felt natural that calls to lists:unzip/1 would also receive special treatment. One very interesting case appears in the source code of disco-0.2/master/src/event_server.erl:123. We show tidier performing a non-trivial code transformation including this refactoring in Figure 3.
#### 3.4 Simplification of list comprehensions
Although the list comprehension transformations that were inherited from erl_tidy are semantically correct, at times, the resulting code was not what an expert Erlang programmer would have written if she were transforming the code by hand. The refactorings in this section describe a series of transformations that are supported by tidier in order to improve the quality of the list comprehensions that are produced and at the same time simplify them even more by using the refactorings that were presented in the previous sections.
**Fun to direct call**
This is a very simple transformation. It is typically performed in conjunction with the refactoring that transforms a fun expression to a local function (Section 3.1), and it transforms the application of a function variable to some arguments into a direct call to the local function with the same arguments.
**Inlining simple and boolean filtering funs**
A simple fun within a lists:map/2 or lists:filter/2 call which is defined by a single match-all clause without guards can be inlined when the map or filter call is transformed to a list comprehension. This simplifies the resulting code and simultaneously makes it more appealing and natural to the programmer’s eye. We illustrate it:
```erlang
lists:filter(fun(X) -> is_gazonk(X) end, L)
```
⇓
```erlang
[X || X <- L, is_gazonk(X)]
```
Figure 2. A case of multiple if simplifications.
#### 3.5 Transformations that reduce the complexity of programs
One of the blessings of high-level languages such as Erlang is that they allow programmers to write code for certain programming tasks with extreme ease. Unfortunately, this blessing occasionally turns into a curse: programmers with similar ease can also write code using a language construct that has the wrong complexity for the task.
Perhaps the most common demonstration of this phenomenon is the unnecessary use of the length/1 built-in function as a test for whether a list is empty.
Figure 3. Tidier simplifying the code of disco-0.2/master/src/event_server.erl.
Figure 4. Code with two unnecessary calls to length/1 (from the code of disco-0.2/master/src/disco_server.erl:280).
While this is something we have witnessed functional programming novices do also in other functional languages (e.g., in ML), the situation is more acute in Erlang because Erlang allows length/1 to also be used as a guard. While most other guards in Erlang have a constant cost and are relatively cheap to use, the cost of length/1 is proportional to the size of its argument. Erlang programmers sometimes write code which gives the impression that they are totally ignorant of this fact.
Consider the following code excerpt from Erlang/OTP R13B’s lib/xmerl/src/xmerl_validate.erl:542:
```erlang
star(_Rule,XML,_,_WSa,Tree,_S) when length(XML) =:= 0 ->
{[Tree],[]};
star(Rule,XMLs,Rules,WSaction,Tree,S) ->
... % recursive case of star function here ...
star(Rule,XMLs2,Rules,WSaction,Tree++WS++[Tree1],S)
end.
```
The use of length/1 to check whether a list is empty is totally unnecessary; tidier will detect this and transform the code to:
```erlang
star(_Rule,[],_,_WSa,Tree,_S) ->
{[Tree],[]};
star(Rule,XMLs,Rules,WSaction,Tree,S) ->
... % recursive case of star function here ...
star(Rule,XMLs2,Rules,WSaction,Tree++WS++[Tree1],S)
end.
```
thereby changing the complexity of this function from quadratic to linear.
The above is not an isolated case. Tidier has discovered plenty of Erlang programs which use length to check whether a list is empty. Occasionally some programs are not satisfied with traversing just one list to check whether it is empty, but traverse even more, as in the code excerpt in Figure 4. Tidier will automatically transform the two length/1 guards to exact equality tests with the empty list (e.g., AllowedNodes =:= []). Note that this transformation is safe to do because the two lists:filter/2 calls which produce these lists supply tidier with enough information to determine that both are proper lists, and therefore the original guards could not have failed.
Tidier has also located a clause with three unnecessary calls to length/1 next to each other. The code is from the latest released version of RefactorErl; its refactoring is shown in Figure 5. Neither we nor tidier understand the comment in Hungarian, but we are pretty sure that the whole case statement can be written more simply as:
```erlang
choose_node({PrefNode, TaskBlackNodes}) ->
... % and choose the ones that are not 100% busy.
AvailableNodes = lists:filter(fun({Node, _Load}) ->
... end, AllNodes),
AllowedNodes = lists:filter(fun({Node, _Load}) ->
... end, AvailableNodes),
    if length(AvailableNodes) == 0 -> busy;
       length(AllowedNodes) == 0 ->
           {all_bad, length(TaskNodes), length(AllNodes)};
true ->
% Pick the node with the lowest load.
[{Node, _}|_] = lists:keysort(2, AllowedNodes),
Node
end;
...
end.
```
thereby saving five lines of code (eight if one also includes the comments) and also avoiding the unnecessary tuple construction and deconstruction.
Similar cases also exist which check whether a list contains exactly one element or more than one element (e.g., length(L) > 1). Whenever it is relatively easy to do, tidier transforms them as in the case shown below (from the code of lib/ssl/src/ssl_server.erl:1139), where tidier has also eliminated the call to hd/1 as part of the transformation.
```erlang
decode_msg(<<_, Bin/binary>>, Format) ->
Dec = ssl_server:dec(Format, Bin),
if length(Dec) == 1 -> hd(Dec);
true -> list_to_tuple(Dec)
end.
```
⇓
```erlang
decode_msg(<<_, Bin/binary>>, Format) ->
Dec = ssl_server:dec(Format, Bin),
case Dec of
[Dec1] -> Dec1;
_ -> list_to_tuple(Dec)
end.
```
In some other cases though, the code also contains other guard checks which complicate the transformation. For example, consider function splice/1 from the source code of ErlIDE (located in file org.erlide.core/erl/pprint/erlide.pperl.erl:171):
```erlang
splice(L) ->
    Res = splice(L, [], []),
    case (length(Res) == 1) and is_list(hd(Res)) of
        true -> no;
        _ -> {yes, Res}
    end.
```
Automatically transforming such code to something like the following is future work:
```erlang
splice(L) ->
    Res = splice(L, [], []),
    case Res of
        [Res1] when is_list(Res1) -> no;
        _ -> {yes, Res}
    end.
```
We intend to enhance tidier with more refactorings that detect programming idioms with wrong complexity for the task and improve programs in similar ways.
### 4. Effectiveness Across Applications
We have applied tidier to a considerable corpus of Erlang programs both in order to ensure that our tool can gracefully handle most Erlang code out there and in order to test its effectiveness. In this section we report our experiences and the number of opportunities for code cleanups detected by tidier on the code of the following open source projects:
1. **Erlang/OTP**: This system needs no introduction. We just mention that we report results on the source code of R13B, totalling about 1,240,000 lines of Erlang code. Many of its applications under lib (e.g., hipe, dialyzer, typer, stdlib, kernel, compiler, edoc, and syntax_tools) had already been fully or partially cleaned up by tidier. Consequently, the number of opportunities for cleanups would have been even higher had such cleanups not already taken place.
2. **Apache CouchDB** is a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API. The CouchDB distribution contains ibrowse and mochiweb as components. We used release 0.9.0, which contains about 20,500 lines of Erlang code.
3. **Disco** is an implementation of the Map/Reduce framework for distributed computing. We used version 0.2 of Disco. Its core is written in Erlang and consists of about 2,500 lines of code.
4. **Ejabberd** is a Jabber/XMPP instant messaging server that allows two or more people to communicate and collaborate in real-time based on typed text. We used the development version of ejabberd from the public SVN repository of the project (revision 2074), consisting of about 55,000 lines of Erlang code.
5. **Erlang Web** is an open source framework for applications based on HTTP protocols. Erlang Web supports both the inets and yaws webservers. The source of Erlang Web (version 1.3) is about 10,000 lines of code.
6. **RefactorErl** is a refactoring tool that supports the semi-automatic refactoring of Erlang programs. We used the latest release of RefactorErl (version 0.6). Its code base consists of about 24,000 lines of code.
7. **Scalaris** is a scalable, transactional, distributed key-value store which can be used for building scalable Web 2.0 services. We used the development version of Scalaris from the public SVN repository of the project (revision 278), consisting of about 35,000 lines of Erlang code. This includes the contrib directory of Scalaris, where the source code of Yaws is also included as a component.
8. **Wings 3D** is a subdivision modeler for three-dimensional objects. We used the development version of Wings from the public SVN repository of the project (revision 608), consisting of about 112,000 lines of Erlang code. This includes its contrib directory.
9. **Wrangler** is a refactoring tool that supports the semi-automatic refactoring of Erlang programs. We used the development version of Wrangler from the public SVN repository of the project (revision 678), consisting of about 42,000 lines of Erlang code.
Throughout its development, we have also applied tidier to its own source code but, since we have been eagerly performing the cleanups that tidier was suggesting, we cannot include tidier in the measurements.
For all projects with SVN repositories the revisions we mention correspond to the most recent revision on the 12th of May 2009.
The number of opportunities for tidier’s transformations on these code bases is shown in Table 1. From these numbers alone, it should be obvious that detecting, let alone actually performing, all these refactorings manually is an extremely strenuous and possibly also error-prone activity. Tidier, even if employed only as a detector of bad code smells, is worth the effort of typing its name on the command line.
Naturally, the number of opportunities for refactorings that tidier recognizes depends on two parameters: size and programming style of a project’s code. As expected, the number of refactoring opportunities on the Erlang/OTP system is much bigger in absolute terms than on all the other code bases combined. This is probably due to the size of the code base and probably also due to the fact that some applications of Erlang/OTP were developed by many different programmers, often Erlang old-timers, over a period of years. But we can also notice that it’s not only code size that matters. The table also shows smaller code bases offering more opportunities for refactoring than code bases of bigger size.
What Table 1 does not show is tidier’s effectiveness. For some columns of the table (e.g., new guards, record matches) tidier’s effectiveness is 100% by construction, meaning that tidier will detect all opportunities for these refactorings and perform them if requested to do so. For some other columns of the table (e.g., lists:keysearch/3, map and filter to list comprehension, structure reuse, case simplify) tidier can detect all opportunities for these refactorings but might not perform them based on heuristics which try to guess the intentions of programmers or take aesthetic aspects of code into account. For some refactorings, especially those for which type information is required, tidier’s effectiveness is currently not as good as we would want it to be. (We will come back to this point in the next section.)
Table 2 contains numbers and percentages of numeric comparisons with == and /= that are transformed to their exact counterparts, and numbers and percentages of calls to size/1 that get transformed to byte_size/1 or tuple_size/1. As can be seen, tidier’s current analysis is pretty effective in detecting opportunities for transforming calls to size/1, but quite ineffective when it comes to detecting opportunities for transforming coercing equalities and inequalities. A global type analysis would definitely improve the situation in this case. (However, bear in mind that achieving 100% on all programs is impossible, since there are uses of ==/2 or size/1 that cannot be transformed to something else, even if tidier were guided by an oracle.)
### 5. Conservatism of Refactorings
Despite the significant number of refactorings that tidier performs on existing code bases, we stress again that tidier is currently ultra conservative and careful to respect the operational semantics of Erlang. In particular, tidier will never miss an exception that programs may generate, whether deliberately or not.
To understand the exact consequences of this, we show a case from the code of lib/edoc/src/otpsgml_layout.erl:148 from Erlang/OTP R13B. The code on that line reads:
```erlang
Functions = [E || E <- get_content(functions, Es)],
```
Although to a human reader it is pretty clear that this code is totally redundant and the result of sloppy code evolution from similar code (actually from the code of lib/edoc/src/edoc_layout.erl), tidier cannot simplify this code to:
```erlang
Functions = get_content(functions, Es),
```
because this transformation would suppress an exception in case function get_content/2 returns something other than a proper list. To do this transformation, type information about the result of get_content/2 is required. Currently, tidier is guided only by a function-local type analysis; extending this analysis to the module level is future work.
Type information can also come in very handy in rewriting calls to \texttt{lists:map/2} and \texttt{lists:filter/2} to more succinct list comprehensions. Without type information, \texttt{tidier} performs the following transformation:
foo(Ps) -> lists:map(fun ({X,Y}) -> X + Y end, Ps).

↓

foo(Ps) -> [foo_0(P) || P <- Ps].
foo_0({X,Y}) -> X + Y.
and cannot inline the body of the auxiliary function and generate the following code:
foo(Ps) -> [X + Y || {X,Y} <- Ps].
because this better refactoring requires definite knowledge that \texttt{Ps} is a list of pairs. Similar issues exist for refactorings involving \texttt{lists:filter/2}. Despite being conservative, \texttt{tidier} is pretty effective: in the code of Erlang/OTP R13B, out of the 679 refactorings of \texttt{lists:map/2} and \texttt{lists:filter/2} to list comprehensions, a bit more than half (347) actually use the inlined translation.
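The behavioural gap between the two translations can be mimicked in Python (an illustrative sketch, not tidier's implementation). In Erlang, a generator pattern such as {X,Y} silently skips non-matching elements; the inlined variant below emulates that with an explicit filter, whereas the auxiliary-function variant raises on a non-pair:

```python
def add_pair(p):
    x, y = p          # raises on anything that is not a 2-element sequence
    return x + y

def foo_aux(ps):
    # auxiliary-function translation: every element goes through add_pair,
    # so a non-pair element raises an exception
    return [add_pair(p) for p in ps]

def foo_inlined(ps):
    # inlined translation, emulating Erlang's [X + Y || {X,Y} <- Ps]:
    # the generator pattern silently skips elements that are not pairs
    return [x + y
            for p in ps
            if isinstance(p, tuple) and len(p) == 2
            for (x, y) in [p]]

pairs = [(1, 2), (3, 4)]
assert foo_aux(pairs) == foo_inlined(pairs) == [3, 7]

mixed = [(1, 2), 3, (3, 4)]
assert foo_inlined(mixed) == [3, 7]   # the non-pair is silently dropped
try:
    foo_aux(mixed)
    raised = False
except TypeError:
    raised = True
assert raised                          # the aux translation raises instead
```

Only when the argument is known to contain pairs exclusively do the two translations coincide, which is why the inlined form needs type information.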
We mentioned that \texttt{tidier} currently performs deforestation for combinations of \texttt{map} and \texttt{filter}. A similar deforestation of \texttt{map+map} combinations, namely the transformation:
L1 = lists:map(fun (X) -> m1:foo(X) end, L0),
L2 = lists:map(fun (X) -> m2:bar(X) end, L1)

↓

L2 = [m2:bar(m1:foo(X)) || X <- L0]
is \emph{not} performed by \texttt{tidier}, because it requires an analysis which determines that functions \texttt{m1:foo/1} and \texttt{m2:bar/1} are side-effect free. Again, hooking \texttt{tidier} to such an analysis is future work.
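Why purity matters here can be sketched in Python (an illustration, not tidier's analysis): the fused single-pass comprehension computes the same values as the two-pass version, but it interleaves the two functions' side effects instead of grouping them, so the fusion is observably equivalent only when both functions are side-effect free.

```python
def foo(x):
    return x + 1

def bar(x):
    return x * 2

l0 = [1, 2, 3]

# two-pass version: builds one intermediate list
l1 = list(map(foo, l0))
l2 = list(map(bar, l1))

# fused single-pass version: no intermediate list
fused = [bar(foo(x)) for x in l0]
assert l2 == fused == [4, 6, 8]

# With side effects, fusion changes the observable order of events:
log = []

def foo_eff(x):
    log.append(("foo", x))
    return x + 1

def bar_eff(x):
    log.append(("bar", x))
    return x * 2

list(map(bar_eff, list(map(foo_eff, l0))))
two_pass_log = list(log)     # all foo events, then all bar events
log.clear()
[bar_eff(foo_eff(x)) for x in l0]
assert log != two_pass_log   # interleaved vs. grouped side effects
```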
6. Concluding Remarks
This paper described opportunities for automatically modernizing Erlang applications, cleaning them up, eliminating certain bad smells from their code, and occasionally also improving their performance. In addition, we presented concrete examples of code improvements and our experiences from using \texttt{tidier} on code bases of significant size.
As mentioned, \texttt{tidier} is completely automatic as a refactoring tool but with equal ease can be used as a detector of opportunities for code cleanups and simplifications. Tools that aid software development, such as code refactorers, have their place in all languages, but it appears that higher-level languages such as Erlang are particularly suited for making the cleanup process fully or mostly automatic. We intend to explore this issue more.
Acknowledgements
We thank Richard Carlsson, Björn Gustavsson, and Kenneth Lundin for supportive comments and suggestions for refactorings. We also thank Dan Gudmundsson: without the use of his \texttt{wx} application, the user interface of \texttt{tidier} would have taken longer to write and would probably look less aesthetically pleasing.
Finally, we thank all developers of projects mentioned in this paper for publicly releasing their code as open source and giving us plenty of opportunities to find nice examples for our paper.
TheyBuyForYou Platform and Knowledge Graph: Expanding Horizons in Public Procurement with Open Linked Data
Ahmet Soylu a,b,*, Oscar Corcho c, Brian Elvesæter a, Carlos Badenes-Olmedo c, Tom Blount d, Francisco Yedro Martínez e, Matej Kovacic e, Matej Posinkovic e, Ian Makgill f, Chris Taggart g, Elena Simperl h, Till C. Lech a and Dumitru Roman a
a SINTEF AS, Norway E-mails: ahmet.soylu@sintef.no, brian.elvesater@sintef.no, till.lech@sintef.no, dumitru.roman@sintef.no
b OsloMet – Oslo Metropolitan University, Norway E-mail: ahmet.soylu@oslomet.no
c Universidad Politécnica de Madrid, Spain E-mails: ocorcho@fi.upm.es, cbadenes@fi.upm.es, fyedro@fi.upm.es
d University of Southampton, The UK E-mail: t.blount@soton.ac.uk
e Jožef Stefan Institute, Slovenia E-mails: matej.kovacic@ijs.si, matej.posinkovic@ijs.si
f OpenOpps Ltd, The UK E-mail: ian@spendnetwork.com
g OpenCorporates Ltd, The UK E-mail: chris.taggart@opencorporates.com
h King’s College London, London, The UK E-mail: elena.simperl@kcl.ac.uk
Abstract. Public procurement is a large market affecting almost every organisation and individual; therefore, governments need to ensure its efficiency, transparency, and accountability, while creating healthy, competitive, and vibrant economies. In this context, open data initiatives and integration of data from multiple sources across national borders could transform the procurement market, for example by lowering the barriers of entry for smaller suppliers and encouraging healthier competition, in particular by enabling cross-border bids. Increasingly more open data is published in the public sector; however, these data are created and maintained in siloes and are not straightforward to reuse or maintain because of technical heterogeneity, lack of quality, insufficient metadata, or missing links to related domains. To this end, we developed an open linked data platform, called TheyBuyForYou, consisting of a set of modular APIs and ontologies to publish, curate, integrate, analyse, and visualise an EU-wide, cross-border, and cross-lingual procurement knowledge graph. We developed advanced tools and services on top of the knowledge graph for anomaly detection, cross-lingual document search, and data storytelling. This article describes the TheyBuyForYou platform and knowledge graph, reports their adoption by different stakeholders and the challenges and experiences we went through while creating them, and demonstrates the usefulness of Semantic Web and Linked Data technologies for enhancing public procurement.
Keywords: Public procurement, knowledge graph, linked data, open data, ontology
1. Introduction
The market around public procurement is large enough to affect almost every single citizen and organisation across a variety of sectors. For this reason, public spending has always been a matter of interest at local, regional, and national levels, and even more so in times of great austerity and increased public scrutiny. Primarily, governments need to be efficient in delivering services, ensure transparency, prevent fraud and corruption, and build healthy and sustainable economies [1, 2]. For example, in the European Union (EU), every year over 250,000 public authorities spend around 2 trillion euros (about 14% of GDP) on the purchase of services, works, and supplies, while the Organisation for Economic Co-operation and Development (OECD) estimates that more than 82% of fraud and corruption cases remain undetected across all OECD countries [3], costing as much as 990 billion euros a year in the EU alone [4]. Moreover, small and medium-sized enterprises (SMEs) are often locked out of markets and restricted by borders due to the high cost of obtaining the required information, whereas larger companies can absorb that cost. This leads to a tendency for governments to rely on monolithic suppliers without adequate competition to deliver good value for taxpayers.
The availability of good quality, open, and integrated procurement data, coming from multiple sources across national borders, could alleviate the aforementioned challenges [5]. This includes government agencies assessing purchasing options, companies exploring new business contracts and placing cross-border bids, and other parties (such as journalists, researchers, local communities, business associations, transparency activists, and individual citizens) looking for a better understanding of the intricacies of the public procurement landscape through decision-making and analytic tools. Free access to public sector information is now a human right, recognised by many developed and developing countries [6]. Projects such as the UK’s GCloud (Government Cloud) have already shown that small businesses can compete effectively with their larger counterparts, given the right environment. However, managing these competing priorities at a national level and coordinating them across different states and many disparate agencies is notoriously difficult. There are several directives put forward by the European Commission (e.g., Directive 2003/98/EC and Directive 2014/24/EU) for improving public procurement practices. These led to the emergence of national public procurement portals living together with regional, local as well as EU-wide public portals [7] with a lack of common agreement across the EU on the data formats for exposing such data sources and on the data models for representing such data, leading to a highly heterogeneous technical landscape. As a result, increasingly more open data is being published in the public sector; however, these are created and maintained in siloes and are not straightforward to reuse or maintain due to lack of quality, insufficient metadata, missing links to related domains, as well as the technical heterogeneity.
To this end, in order to deal with the aforementioned challenges, we built a platform, called TheyBuyForYou [8], consisting of a set of modular REST APIs and ontologies, to publish, curate, integrate, analyse, and visualise an EU-wide, cross-border, and cross-lingual procurement knowledge graph [8–11] (i.e., KG, an interconnected semantic knowledge organisation structure [12, 13]). The KG includes procurement and company data gathered from multiple disparate sources across the EU and integrated through a common ontology network using an extract, transform, load (ETL) approach [14]. We developed and used a set of advanced end-user tools and services including machine learning (ML) algorithms on top of the resulting knowledge graph, so as to find anomalies in data, enable searching across documents in different languages, and create stories from the data. This article describes the TheyBuyForYou platform and knowledge graph, reports their adoption by different stakeholders and challenges and experiences we went through while creating them, and demonstrates the usefulness of Semantic Web and Linked Data technologies for enhancing public procurement.
The rest of the article is structured as follows. Section 2 motivates the overall work presented, while Section 3 presents the related work. Section 4 describes...
Public sector procurement platforms have largely been transferred from private sector tools that were deployed in the manufacturing sector. During this transfer, very little consideration has been given to aspects such as software integration and interoperability, transparency, or the specific needs of governments. As a result, many of the tools in use by governments are often not optimised for government use, or are subject to restrictive contracts which unnecessarily complicate publishing open data. For example, for the management of UK’s and Germany’s procurement data, the business intelligence supplier Dun & Bradstreet includes proprietary identifiers (DUNS ID) for all government suppliers in their spend analysis tools - which means that the data cannot be reused without a subscription to Dun & Bradstreet\(^3\).
Tender advertising portals are also hampering the progress of transparency, because the portals claim copyright over all data published in them, even though their public-sector clients are the authors and the data on tender opportunities are required by law to be published openly. The technical landscape for managing such contracts is very heterogeneous: for example, even in medium-sized cities, contracts are handled using different tools and formats across departments, including relational databases, Excel sheets, and Lotus Notes. This makes it difficult to have a high-level overview of processes and decisions. Furthermore, proprietary data formats and restrictive contracts also create supplier tie-in, making it difficult for governments to take their custom to rival suppliers or to create their own solutions. This raises costs, disenfranchises citizens, and makes it harder to compare the value for money delivered by different suppliers.
These solutions have important limitations: relevant data is missing, of sub-par quality, or hardly accessible, and the technology and tools used by decision makers to explore and engage with it are rudimentary in the level of detail and actionable insight they offer. In this respect, open data initiatives and a standards-based approach for sharing and integrating procurement-related data could transform public procurement, notably in terms of:
(i) economic development by delivering better economic outcomes from public spending, in particular for SMEs (to get better access to public tenders, competing with more established players etc.);
(ii) demand management by spotting trends in spending and supplier management to achieve long-term goals such as cost savings and efficiency gains;
(iii) competitive markets by identifying areas for cost cuts through healthier competition;
(iv) procurement intelligence, by producing advanced analytics to inform decision support, risk monitoring, and supply market analysis for procurement managers.
To manage their spending, governments at both the local and the European level produce a constant stream of documents (e.g., tender notices and award notices) of overwhelming volume as part of each contracting process. A typical process is composed of several stages, such as tender, award, and contract, with associated relevant notices, which are commonly published in the official language of the respective country. To facilitate a global overview of spending activity, automatic means for integrating and analysing this data stream are necessary. The TheyBuyForYou platform contributes to this transformation through:
(i) a combination of open procurement APIs, and online services and tools, to be used by different stakeholders for various data management processes;
(ii) making existing data more useful by adding more structure to it, linking it to various sources and vocabularies, resolving heterogeneity, and turning it into a knowledge graph that could be systematically analysed;
(iii) cross-lingual search and anomaly detection techniques to search and discover patterns and anomalies across multiple data sets and languages.
\(^3\)https://www.dnb.co.uk
(iv) data storytelling techniques for generating informative summaries of the analysis results to aid decision making.
3. Related work
We focus on procurement data, related to tenders, awards, and contracts, and on basic company data. In what follows, we analyse relevant related work from the perspective of these types of data. Procurement and company data are fundamental to realising many key business scenarios and may be extended with additional data sources.
Public procurement notices play two important roles for the public procurement process: as a resource for improving competitive tendering, and as an instrument for transparency and accountability [15]. With the progress of eGovernment initiatives, the publication of information on contracting procedures is increasingly being done using electronic means. In return, a growing amount of open procurement data is being released, leading to various standardisation initiatives like OpenPEPPOL[^4], CENBII[^5], TED eTenders[^6], CODICE[^7], and the Open Contracting Data Standard (OCDS[^8]). Data formats and file templates were defined within these standards to structure the messages being exchanged by the various agents involved in the procurement process. These standards primarily focus on the type of information that is transmitted between the various organisations involved in the process, aiming to achieve certain interoperability in the structure and semantics of data. The structure of the information is commonly provided by the content of the documents that are exchanged. However, these initiatives still generate a lot of heterogeneity. In order to alleviate these problems, several ontologies, including PPROC [16], LOTED2 [17], MOLDEAS [18], and PCO [19], as well as the upcoming eProcurement ontology[^9], emerged with different levels of detail and focus (e.g., legal and process-oriented). So far, however, none of them has reached wide adoption, mainly due to their limited practical value.
Corporate information, including basic company information as well as financial and contextual data, is highly relevant in the procurement context, not only for enabling many data value chains, but also for transparency and accountability. Recently, a number of initiatives have been established to harmonise and increase the interoperability of corporate and financial data. These include public initiatives such as the Global Legal Entity Identification System—GLEIS[^10], Bloomberg’s open FIGI system for securities[^11], as well as long-established proprietary initiatives such as the Dun & Bradstreet DUNS number[^12]. Other notable initiatives include the European Business Register (EBR[^13]), the Business Register Exchange (BREX[^14]), and the eXtensible Business Reporting Language (XBRL[^15]) format. However, these are mostly fragmented across borders, limited in scope and size, and siloed within specific business communities. There are also a number of ontologies developed for capturing company and company-related data, including the W3C Organisation ontology (ORG[^16]), some e-Government Core Vocabularies[^17], and the Financial Industry Business Ontology (FIBO) [20]. They have varying focuses (e.g., organisational and financial), do not sufficiently cover basic company information, or are too complex due to their many ontological commitments [21].
To date, no platform or KG exists (in whatever form), linking and provisioning cross-border and cross-language procurement and company data and allowing advanced decision making, analytics, and visualisation.
4. TheyBuyForYou platform
The TheyBuyForYou platform is mainly composed of components for ingesting, integrating, curating, and publishing procurement and supplier (i.e., company) data. The relevant data sets are reconciled and mapped into RDF with respect to an ontology network in order to create a knowledge graph [22]. In what follows, we describe the main ingredients of the platform.
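As a rough illustration of this mapping step (the URIs and property names below are invented for the example and are not the project's actual ontology), turning a reconciled record into RDF can be as simple as emitting N-Triples lines:

```python
# Hypothetical sketch: emit N-Triples for one tender record.
# All URIs and property names here are illustrative assumptions.
def tender_triples(tender_id, title, buyer_uri):
    s = f"<http://example.org/tender/{tender_id}>"
    return [
        # literal-valued property for the tender title
        f'{s} <http://purl.org/dc/terms/title> "{title}" .',
        # object property linking the tender to its buyer organisation
        f"{s} <http://example.org/ontology#buyer> <{buyer_uri}> .",
    ]

triples = tender_triples("t-1", "Road works", "http://example.org/org/o-9")
assert all(t.endswith(" .") for t in triples)
assert '"Road works"' in triples[0]
```

A real pipeline would of course use an RDF library and the project's ontology terms; the point is only that each reconciled record contributes a small, linkable set of triples to the knowledge graph.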
[^4]: https://peppol.eu
[^5]: http://cnenb.eu
[^7]: https://contrataciondelestado.es/wps/portal/codice
[^8]: http://standard.open-contracting.org
[^10]: https://www.gleif.org
[^11]: https://www.omi.org/figi
[^12]: http://www.dnb.com/duns-number.html
[^13]: http://www.ebr.org
[^14]: https://brex.io
[^15]: https://www.xbrl.org
[^16]: https://www.w3.org/TR/vocab-org
4.1. Data providers
The content of our KG is based on the procurement and company data that is provided by two main data providers extracting and aggregating data from multiple sources. The first one is OpenOpps\textsuperscript{18}, which sources procurement data primarily from the Tenders Electronic Daily (TED)\textsuperscript{19} data feed and from the procurement transparency initiatives of individual countries. TED is dedicated to European public procurement and publishes 520 thousand procurement notices a year. The second provider is OpenCorporates\textsuperscript{20}, which collects company data from national company registers and other regulatory sources. OpenOpps is the largest data source of European tenders and contracts, while OpenCorporates is the largest open database of companies in the world. Both OpenOpps and OpenCorporates gather relevant data using a range of tools, including API calls, Web scraping, and data extraction.
Regarding the procurement data, in the context of this work, OpenOpps provides gathered, extracted, pre-processed, and normalised data from hundreds of data sources completely openly through an API that can be used for research purposes. OpenOpps currently handles 685 data sources, with 569 of these being from Europe. This totals over 3 million documents dating back to 2010. All of the data for OpenOpps is gathered using a series of over 400 different scripts configured to collect data from each source. Each script is triggered daily and runs to gather all of the documents published in the last twenty-four hours. Each script is deployed on a monitored platform, giving the ability to check which scripts have failed, or which sources have published fewer than expected. Data is collected in the raw form and then mapped to the OCDS format after being cleansed. Where necessary, the data is processed, e.g., splitting single records into several fields, to comply with the data standard. Regarding the company data, OpenCorporates provides more than 140 million company records from a large number of jurisdictions\textsuperscript{21}. OpenCorporates pre-processes and normalises data collected, maps collected data to its own data model, and makes data available through an API.
The data collected from OpenOpps and OpenCorporates is openly available under the Open Database License (ODbl)\textsuperscript{22}. It is available on GitHub\textsuperscript{23} in JSON format and is updated on a monthly basis. The data is also made available through Zenodo\textsuperscript{24} with a digital object identifier (DOI). As of October 2020, the released data amounts to 3 GB and dates back to January 2019.
4.2. Ontology network
We developed two ontologies, one for representing procurement data and one for company data, using common techniques recommended by well-established ontology development methods [24, 25]. A bottom-up approach was used, including identifying the scope and user group of the ontology, the requirements, and the ontological and non-ontological resources. We address suppliers, buyers, data journalists, data analysts, control authorities, and regular citizens who want to explore and understand how public procurement decisions affect economic development, efficiency, competitiveness, and supply chains. This includes providing better access to public tenders; spotting trends in spending and supplier management; identifying areas for cost cuts; and producing advanced analytics.
Regarding the procurement data, we developed an ontology based on OCDS [26] – a data model that is gaining significant traction worldwide and that we use for representing our underlying procurement data. The OCDS data model is organised around the concept of a contracting process, which gathers all the relevant information associated with a single initiation process in a structured form. The phases of this process mainly include planning, tender, award, contract, and implementation. An OCDS document may be one of two kinds: a release or a record. A release is associated with an event in the lifetime of a contracting process and presents the related information, while a record compiles all the known information about a contracting process. A contracting process may have many releases associated with it but only one record. We went through the reference specification of an OCDS release and interpreted each of the sections and extensions (i.e., structured and unstructured). In total, there are currently 25 classes, 69 object properties, and 81 datatype properties created from the four main OCDS sections and 11 extensions (see Figure 1 for a fragment of the ontology). The core classes are:
\(18\)https://openopps.com
\(19\)https://ted.europa.eu
\(20\)https://opencorporates.com
\(21\)https://opencorporates.com/registers
\(22\)https://opendatacommons.org/licenses/odbl
\(23\)https://github.com/TBFY/data-sources
\(24\)https://zenodo.org
Fig. 1. A fragment of the OCDS ontology.
- ContractingProcess,
- Plan,
- Tender,
- Award, and
- Contract.
A contracting process may have one planning and one tender stage. Each tender may have multiple awards issued, while only one contract may be issued for each award. Other ontology classes include Item, Lot, Bid, Organisation, and Transaction. We reused terms from external vocabularies and ontologies where appropriate, including Dublin Core\textsuperscript{25}, FOAF\textsuperscript{26}, Schema.org\textsuperscript{27}, SKOS\textsuperscript{28}, and the W3C Organisation ontology\textsuperscript{29}. The OCDS ontology is available on GitHub in two versions\textsuperscript{30}: one with the core OCDS terms and another with the extensions.
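The stage cardinalities described above (at most one planning and one tender stage per contracting process, many awards per tender, at most one contract per award) can be sketched as a minimal Python data model. This is an illustrative simplification: the class and field names below are not the ontology's actual terms.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Contract:
    contract_id: str

@dataclass
class Award:
    award_id: str
    contract: Optional[Contract] = None   # at most one contract per award

@dataclass
class Tender:
    tender_id: str
    awards: List[Award] = field(default_factory=list)  # a tender may issue many awards

@dataclass
class ContractingProcess:
    ocid: str                              # contracting-process identifier
    planning: Optional[str] = None         # at most one planning stage
    tender: Optional[Tender] = None        # at most one tender stage
```

Encoding the "at most one" relations as `Optional` fields and the "many" relation as a list mirrors the cardinalities stated in the text.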
Regarding the company data, one of the main resources used during the ontology development was the data models provided by four company data providers: OpenCorporates, SpazioDati\(^{31}\), Brønnøysund Register Centre\(^{32}\), and Ontotext\(^{33}\). The data supplied by these providers originally came from both official and unofficial sources. The need for harmonising and integrating data sets was a guiding factor for the ontology development process, since the data sets have different sets of attributes and different representations with similar semantics. The resulting ontology, called the euBusinessGraph ontology [21, 27], is composed of 20 classes, 33 object properties, and 57 datatype properties, allowing us to represent basic company-related data. The ontology covers registered organisations (i.e., companies that are registered as legal entities), identifier systems (i.e., a company can have several identifiers), officers (i.e., associated officers and their roles), and data sets (i.e., capturing information about data sets that are offered by company data providers). Registered organisations are the main entities for which information is captured in the ontology (see Figure 2 for a fragment of the ontology). The main classes include:
- RegisteredOrganisation
- Identifier
- IdentifierSystem
- Person
- Dataset.
Three types of classification are defined in the ontology, representing the company type, company status, and company activity. These are modelled as SKOS concept schemes. Other external vocabularies and ontologies used include the W3C Organisation ontology, the W3C Registered Organisation Vocabulary (RegOrg)\(^{34}\), SKOS, Schema.org, and the Asset Description Metadata Schema (ADMS)\(^{35}\).
\(31\)http://spaziodati.eu
\(32\)http://www.brreg.no
\(33\)https://www.ontotext.com
\(34\)https://www.w3.org/TR/vocab-regorg
\(35\)https://www.w3.org/TR/vocab-adms
4.3. Platform architecture
The TheyBuyForYou platform follows state-of-the-art principles in software development, keeping coupling among all the software components low.
Figure 3 provides a high-level overview of the architecture. On the left-hand side, we include the ETL processes that are used to incorporate the data sources into the KG. On the right-hand side, we provide an overview of the main data storage mechanisms, including a triple store for the generated RDF-based data and a document store for the documents associated with public procurement (tender notices, award notices, etc.), whose URLs are accessible via specific properties of the KG (using `rdfs:seeAlso`). For those cases where a URI is also available in the original data sources (from OpenOpps and OpenCorporates), the URI is provided in the KG using a statement with `owl:sameAs`. This allows our data providers to provide additional information about tenders or companies under a different license or access rights (e.g., commercial use).
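As a sketch of this linking step, the helper below emits such statements in N-Triples syntax. The example URIs are made up for illustration; only the `owl:sameAs` and `rdfs:seeAlso` predicate URIs are standard.

```python
OWL_SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"
RDFS_SEE_ALSO = "http://www.w3.org/2000/01/rdf-schema#seeAlso"

def link_triple(subject_uri, predicate_uri, object_uri):
    # serialise one statement in N-Triples syntax
    return "<%s> <%s> <%s> ." % (subject_uri, predicate_uri, object_uri)

# link a KG award to its counterpart in the source data set (made-up URIs)
triple = link_triple("http://data.tbfy.eu/award/123",
                     OWL_SAME_AS,
                     "https://openopps.com/awards/123")
```

The same helper serves for `rdfs:seeAlso` statements pointing at the associated procurement documents.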
The KG is accessible via a core REST API. Our API catalogue is focused on providing access mechanisms to those who want to make use of the knowledge graph, particularly software developers. The APIs therefore mostly provide access to the KG through the HTTP GET verb, and the API catalogue is organised around the main entities that are relevant for public procurement, such as contracting processes, awards, and contracts. Since the KG is stored as RDF in a triple store, there is also a SPARQL endpoint for executing ad-hoc queries. Finally, there is a cross-lingual search API for searching across documents in various languages and an API gateway providing a single entry point to the APIs provided by the platform.
5. KG provisioning
The KG provisioning encompasses processes for reconciling and linking the two aforementioned and originally disconnected data sets, mapping and translating them into Linked Data with respect to an ontology network, and publishing the resulting knowledge graph through several APIs and endpoints. We describe the ingestion and publication processes in what follows.
5.1. Data ingestion
The ingestion process extracts procurement and company data from the data providers, matches suppliers appearing in procurement data against company
Fig. 4. The daily data ingestion process for the KG.
data (i.e., reconciliation), and translates the data sets into RDF using RML. The daily process is composed of the following steps (see Figure 4):
(1) Download procurement data: Downloads procurement data from the OpenOpps OCDS API as JSON data files.
(2) Reconcile suppliers: Matches supplier records in awards using the OpenCorporates Reconciliation API. The matching company data is downloaded using the OpenCorporates Company API as JSON data files.
(3) Enrich downloaded JSON data: Enriches the JSON data files downloaded in steps 1 and 2 by adding new properties to support the mapping to RDF (e.g., fixing missing identifiers).
(4) Convert JSON to XML: Converts the JSON data files from step 3 into corresponding XML data files. Due to limitations in JSONPath, i.e., lack of operations for accessing parent or sibling nodes from a given node, we prefer to use XPath as the query language in RML.
(5) Map XML data to RDF: Runs RML Mapper on the enriched XML data files from step 4 and produces N-Triples files.
(6) Store and publish RDF: Stores the RDF (N-Triples) files from step 5 in Apache Jena TDB and publishes them through Apache Jena Fuseki.
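Step 4 above can be sketched as follows. This is a minimal illustration using only the Python standard library, not the project's actual conversion script, and it assumes that the JSON keys are valid XML tag names:

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(node, tag):
    # recursively convert parsed JSON into an element tree, so that the
    # data can later be addressed with XPath (incl. parent/sibling axes)
    elem = ET.Element(tag)
    if isinstance(node, dict):
        for key, value in node.items():
            elem.append(json_to_xml(value, key))
    elif isinstance(node, list):
        for value in node:
            elem.append(json_to_xml(value, "item"))
    else:
        elem.text = "" if node is None else str(node)
    return elem

# a made-up, heavily truncated OCDS-like release
release = json.loads('{"ocid": "ocds-1", "awards": [{"id": "a1"}]}')
xml = ET.tostring(json_to_xml(release, "release"), encoding="unicode")
```

Wrapping list elements in `<item>` elements gives XPath a stable axis for iterating over arrays, which is exactly what the JSONPath limitation mentioned above makes awkward.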
Python was used as the primary scripting language, RMLMapper as the mapping tool to generate RDF, and Apache Jena Fuseki & TDB as the SPARQL engine and triple store. The Python scripts operate on files (input and output), and the services were dockerised using Docker and made available on Docker Hub to ease deployment. All development work and results towards the creation of the knowledge graph are published and maintained as open source software on GitHub. The data dumps of the KG are available on Zenodo and, as of October 2020, include:
- 139 million statements,
- 1.46 million tenders,
- 1.72 million awards,
- 103 thousand companies (for matching suppliers in awards).
The data in the knowledge graph is updated on a daily basis, while a new data dump is uploaded to Zenodo on a monthly basis.
5.2. KG publication
The KG is published through a SPARQL endpoint, a REST-based knowledge graph API (i.e., KG API), a cross-lingual search API, and an API gateway as a single access point for all. The SPARQL endpoint and API gateway are accessible online. In this section, we focus on the knowledge graph API, while cross-lingual search API is covered in Section 6 as part of advanced services and tools.
The knowledge graph API (see GitHub) was built using the R4R tool. This tool is based on Velocity templates and allows specifying what the REST API will look like and configuring it by means
---
37 https://rml.io
38 https://openopps.com/api/tbfy/ocds
39 https://api.opencorporates.com/documentation/Open-Refine-Reconciliation-API
41 https://api.opencorporates.com/documentation/API-Reference
43 https://hub.docker.com/r/tbfy/kg-ingestion-service
44 https://github.com/TBFY/knowledge-graph
45 http://data.tbfy.eu/sparql
46 http://data.tbfy.eu
47 https://github.com/TBFY/api-gateway
48 http://data.tbfy.eu/kg-api
49 https://github.com/TBFY/knowledge-graph-API
50 https://github.com/TBFY/r4r
51 https://velocity.apache.org
A. Soylu et al. / TheyBuyForYou Platform and Knowledge Graph: Expanding Horizons in Public Procurement with Open Linked Data
Table 1: The knowledge graph API developed around the main resources.
<table>
<thead>
<tr>
<th>URI Path</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr><td>/contractingProcess</td><td>Gets a list of contracting processes</td></tr>
<tr><td>/contractingProcess/{id}</td><td>Finds a contracting process by ID</td></tr>
<tr><td>/contractingProcess/{id}/award</td><td>Gets the awards of a contracting process</td></tr>
<tr><td>/contractingProcess/{id}/buyer</td><td>Gets the buyers of a contracting process</td></tr>
<tr><td>/contractingProcess/{id}/contract</td><td>Gets the contracts of a contracting process</td></tr>
<tr><td>/contractingProcess/{id}/tender</td><td>Gets the tender of a contracting process</td></tr>
<tr><td>/award</td><td>Gets a list of awards</td></tr>
<tr><td>/award/{id}</td><td>Finds an award by ID</td></tr>
<tr><td>/award/{id}/amendment</td><td>Gets the amendments of an award</td></tr>
<tr><td>/award/{id}/document</td><td>Gets the documents of an award</td></tr>
<tr><td>/award/{id}/item</td><td>Gets the items of an award</td></tr>
<tr><td>/award/{id}/supplier</td><td>Gets the suppliers of an award</td></tr>
<tr><td>/contract</td><td>Gets a list of contracts</td></tr>
<tr><td>/contract/{id}</td><td>Finds a contract by ID</td></tr>
<tr><td>/contract/{id}/amendment</td><td>Gets the amendments of a contract</td></tr>
<tr><td>/contract/{id}/buyer</td><td>Gets the buyers of a contract</td></tr>
<tr><td>/contract/{id}/document</td><td>Gets the documents of a contract</td></tr>
<tr><td>/contract/{id}/item</td><td>Gets the items of a contract</td></tr>
<tr><td>/tender</td><td>Gets a list of tenders</td></tr>
<tr><td>/tender/{id}</td><td>Finds a tender by ID</td></tr>
<tr><td>/tender/{id}/contractingProcess</td><td>Gets the contracting processes of a tender</td></tr>
<tr><td>/tender/{id}/document</td><td>Gets the documents of a tender</td></tr>
<tr><td>/tender/{id}/item</td><td>Gets the items of a tender</td></tr>
<tr><td>/organisation</td><td>Gets a list of organisations</td></tr>
<tr><td>/organisation/{id}</td><td>Finds an organisation by ID</td></tr>
<tr><td>/organisation/{id}/award</td><td>Gets the awards of an organisation</td></tr>
<tr><td>/organisation/{id}/contractingProcess</td><td>Gets the contracting processes of an organisation</td></tr>
</tbody>
</table>
of SPARQL queries, similarly to what has been proposed in other state-of-the-art tools like BASIL (Building Apis SImpLy) [28] or GRLC [29]. Beyond exposing URIs for the resources available in the KG, it also supports authentication and authorisation, pagination, sorting criteria over specific properties, nested resources, and other typical functionalities normally available in REST APIs. The current implementation only returns JSON objects for the API calls; it will be extended in the future to provide additional content negotiation capabilities and formats (JSON-LD, Turtle, HTML), which are common in Linked Data enabled APIs.
Online documentation\(^50\) is available and continuously updated. It provides the details of the resources offered by our REST API in relation to the OCDS ontology. The core resources derived from the OCDS ontology are: (i) ContractingProcess, (ii) Award, (iii) Contract, (iv) Tender, and (v) Organisation. For all these resources, there is support for pagination (e.g., GET /award?size=5&offset=1), sorting (e.g., GET /contract?sort=-startDate), and filtering (e.g., by the status of the award: GET /award?status=active).
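For illustration, a small client-side helper can assemble such calls. The base URL mirrors the KG API endpoint mentioned in the footnotes, but the function itself is a hypothetical convenience, not part of the platform:

```python
from urllib.parse import urlencode

KG_API = "http://data.tbfy.eu/kg-api"  # KG API endpoint from the paper

def kg_api_url(resource, size=None, offset=None, sort=None, **filters):
    # build a KG API request URL with optional filtering, pagination, and sorting
    params = dict(filters)
    for name, value in (("size", size), ("offset", offset), ("sort", sort)):
        if value is not None:
            params[name] = value
    query = urlencode(params)
    return "%s/%s%s" % (KG_API, resource, "?" + query if query else "")
```

For example, `kg_api_url("award", size=5, offset=1)` reproduces the pagination example above.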
6. Advanced services and tools
We implemented a number of advanced services and tools on top of the platform and KG: namely anomaly detection, cross-lingual document search, and storytelling tools.
6.1. Anomaly detection
Public procurement is particularly susceptible to corruption, which can impede economic development, create inefficiencies, and reduce competitiveness. At the same time, manually analysing a large volume of
\(^50\)https://github.com/TBFY/knowledge-graph-API/wiki
procurement cases for detecting possible fraud is not feasible. In this respect, using ML techniques for identifying patterns and anomalies, such as fraudulent behaviour or monopolies, in procurement processes and networks across independently produced data sets is highly relevant [30]. For example, by building a network of entities (individuals, companies, governmental institutions, etc.) connected through public procurement events, one can discover exceptional cases as well as large and systematic patterns standing out from the norm, whether they represent examples of good procurement practice or possible cases of corruption.
We applied several ML approaches to the analysis of public procurement data: unsupervised, supervised, and statistical analysis [31, 32].
Unsupervised learning is employed to look for previously undetected patterns in a data set with no pre-existing labels and with minimal or no human supervision. Our approach is to group the data into a set of clusters with the k-Means method to identify commonalities in the data, and finally to detect anomalous data points that do not fit into the previously identified clusters. In the first step, every tender is transformed into a feature vector. The conversion retains numerical values, while categorical values are converted into numerical ones. Then, to make features comparable, the feature vectors are normalised. To determine the optimal number of clusters, k-Means is run 20 times, with the number of clusters incremented at every run. For each iteration, a point with value \( x \), the number of clusters, and value \( y \), the gain, is stored. The points are converted to a logarithmic scale, and the first and the last 5 points are used for two separate linear regressions. The intersection of the two linear curves determines the optimal number of clusters. Once the optimal number of clusters is defined, k-Means is run on the data set. The vectors deviating most from their centroids (i.e., by Cartesian distance) are identified and ordered by the deviation value.
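The cluster-count selection described above can be sketched as follows. This is a simplified, illustrative re-implementation: the log-scale conversion and the 5-point edge regressions follow the description, while the helper names and the synthetic gains are assumptions.

```python
import math

def linreg(xs, ys):
    # ordinary least-squares fit of y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def optimal_k(gains, edge=5):
    # gains[i] is the clustering gain obtained with k = i + 1 clusters;
    # fit one line to the first `edge` points and one to the last `edge`
    # points on a log-log scale, and return the k at their intersection
    pts = [(math.log(i + 1), math.log(g)) for i, g in enumerate(gains)]
    a1, b1 = linreg([x for x, _ in pts[:edge]], [y for _, y in pts[:edge]])
    a2, b2 = linreg([x for x, _ in pts[-edge:]], [y for _, y in pts[-edge:]])
    x = (b2 - b1) / (a1 - a2)   # intersection of the two fitted lines
    return max(1, round(math.exp(x)))
```

With a gain curve that rises steeply up to some k and flattens afterwards, the intersection of the two regression lines lands near the elbow.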
The supervised analysis approach implemented in our platform is based on a decision tree and is used to get additional insights into the public procurement decision-making process. Decision trees belong to the class of non-parametric supervised learning algorithms and are used for classification and regression; they present decisions visually and explicitly through a tree-like model. The decision tree algorithm starts by finding the parameter yielding the highest amount of information and splits the data into subgroups based on it. It then iteratively continues the computation on smaller subgroups until all the data within a subgroup have the same label. In order to retain a clearer overview, we use a binary decision tree, namely Classification and Regression Trees (CART). For the split criterion we use the Gini index, as it usually performs better than entropy for large partitions. We allow users to select parameters of their own choosing (for instance, buyer size, bidder municipality, purchase type, number of offers, and the depth of the decision tree model). This way, users can compare the importance of various parameter subsets contributing to the success of public tenders.
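The Gini-based split selection at the heart of CART can be sketched with a toy re-implementation (not the platform's code; the example parameter and labels are made up):

```python
def gini(labels):
    # Gini impurity of a list of binary labels (0/1)
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 1.0 - p * p - (1.0 - p) * (1.0 - p)

def best_split(values, labels):
    # pick the threshold on one numeric parameter (e.g., number of offers)
    # that minimises the weighted Gini impurity of the two subgroups
    best_t, best_score = None, float("inf")
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

CART applies this search over all candidate parameters at each node and recurses on the resulting subgroups.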
Finally, the statistical approach deals with various ratios between pre-selected parameters. Currently, the ratio between the tender value and the estimated number of employees of a bidder is examined. Bidders are sorted by their ratio value and every bidder is turned into a point: the \( x \) value is a consecutive number and the \( y \) value is the ratio. We developed a visual presentation of the interdependence of tender value and the number of employees. As expected, the resulting graph shows deviating behaviour at the beginning of the list, that is, big companies that won small tenders, as well as at the end of the list, that is, companies with a small number of employees that won big tenders. The \( y \) axis is then turned into a logarithmic scale and linear regression is performed, with the first and the last 10% of points excluded. The linear curve is a measure of normal behaviour, and an anomaly is defined as any point that deviates from the linear curve by more than 20%.
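The ratio-based check can be sketched like this. The helper is an illustrative approximation: deviation is measured in log space (so "20%" becomes log 1.2), and the input ratios in the example are synthetic.

```python
import math

def ratio_anomalies(ratios, trim=0.1, threshold=0.2):
    # sort the ratios, fit a line to log(ratio) vs. rank while excluding
    # the first and last 10% of points, then flag ranks whose log-ratio
    # deviates from the fitted line by more than log(1 + 20%)
    ys = sorted(math.log(r) for r in ratios)
    n = len(ys)
    lo, hi = int(n * trim), n - int(n * trim)
    xs = list(range(lo, hi))
    mx = sum(xs) / len(xs)
    my = sum(ys[lo:hi]) / len(xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys[lo:hi])) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    limit = math.log(1.0 + threshold)
    return [i for i, y in enumerate(ys) if abs(y - (a * i + b)) > limit]
```

On a synthetic series with two planted outliers, the two extreme ranks are the ones flagged.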
We implemented a system (see Figure 5) based on the techniques mentioned above and capable of processing tens of millions of records, and made it available online\(^3\). The system allows detecting a large class of anomalies in automatic mode or in exploratory mode (with human-machine interaction).
### 6.2. Cross-lingual search
Procurement processes not only create structured data, but also constantly produce additional documents (tender specifications, contract clauses, etc.). These are commonly published in the official language of the corresponding public administrations. Only some of them, for instance those published in TED, are multilingual, and the documents in the local language are typically longer and much more detailed than their translations into other languages. A civil servant working at a public administration on a contracting process may be interested in understanding how other public administrations in the same country or in different countries (and with different languages) have worked in similar contexts. Examples include finding organisations related to a particular procurement process, or searching for tenders related to a given procurement text.
We worked on an added-value service\(^5\) to support these types of users, with the possibility of finding documents that are similar to a given one independently of the language in which they are available. We also generated a Jupyter notebook with some representative examples, so as to facilitate its use.\(^1\) This service is based on the use of cross-lingual labels created from sets of cognitive synonyms (synsets) and unsupervised probabilistic topic models [33]. The original low-dimensional latent space created by probabilistic topic models is extended with two new languages: in addition to the original English, French, and Spanish models, we created Portuguese and Italian models to increase the common space shared by all languages. Topics are described by cross-lingual labels created from the list of concepts retrieved from the Open Multilingual WordNet. Each word is queried to retrieve its synsets. The final set of synsets for a topic is the union of the synsets of the individual top words of a topic (the top 3, based on empirical evidence). Documents are then represented as data points and transformed from the original feature space based on monolingual topic distributions into a cross-lingual hierarchical space, so that similar data points share relevant cross-lingual concepts (see Figure 6). Since topic models create latent themes from word co-occurrence statistics in a corpus, a cross-lingual topic specifies the knowledge about the word-word relations it contains for each language.
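Once documents are projected into the shared cross-lingual topic space, similarity search reduces to comparing their topic distributions. A minimal sketch, with made-up vectors and document names:

```python
import math

def cosine(u, v):
    # cosine similarity between two topic-distribution vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(query_vec, corpus):
    # rank documents, regardless of their language, by similarity of their
    # topic distributions in the shared cross-lingual space
    return sorted(corpus, key=lambda doc: cosine(query_vec, doc[1]), reverse=True)
```

A query document whose topic distribution resembles an English document's will rank it first even when the corpus mixes languages, since the space itself is language-independent.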
The JRC-Acquis data set\(^4\) was used to build the model relating the documents. It is a collection of legislative texts written in 23 languages that were manually classified into subject domains according to the EUROVOC\(^5\) thesaurus. The English, Spanish, French, Italian and Portuguese editions of the corpus were used.
---
\(^1\)http://tbfy.ijs.si
\(^2\)http://tbfy.library.linkeddata.es/search-api
\(^3\)http://bit.ly/tbfy-search-demo
\(^5\)http://eurovoc.europa.eu
Fig. 6. (a) Documents are represented in a unique space that relies on the latent layer of cross-lingual topics obtained by LDA and hash functions through hierarchies of synsets. (b) Theme-aligned topics described by top 5 words based on EUROVOC annotations.
The EUROVOC taxonomy was pre-processed to satisfy the topic independence assumption of probabilistic topic models, by using hierarchical relations. The initial 7,193 concepts from 21 domain areas, such as politics, law, or economics, were reduced to 452 categories, which are independent and can be used to train the topic models. Documents were pre-processed (part-of-speech filtering and lemmatisation) by the librAIry NLP service and projected into the previously created topic space.
6.3. Data storytelling
Buyers, suppliers, journalists, and citizens need to be provided with tools that allow them to understand and communicate the complex space of procurement at a high level, without going through the complexity of general-purpose data visualisation and analytics approaches. Therefore, there is a need for improved methods to create visualisations that communicate findings in an easy-to-understand way. Existing tools for interactive visualisations and infographics have improved considerably in recent years [34], but the story they are telling is often implicit and difficult to replicate or integrate with other analytics sources. In this respect, storytelling [35] is a viable approach, since it uses facts, statistical results, and data visualisations to convey information on a domain-space.
---
http://librAIry.linkeddata.es/nlp
Automatic storytelling technology available so far is not only restricted to very narrow domains (e.g., financial news, weather, sports results); the principles by which stories are generated are also not aligned with more general data design frameworks, which in turn focus exclusively on visual components. To this end, we developed a data storytelling tool based on the basic design patterns that govern the organisation of procurement data sets, using these patterns, as well as the features of the data sets and the way they are visualised (e.g., type of data, type of data encoding, number of data points, etc.), to build configurable, rich story templates, which can be filled in by the end users. The tool is designed as a client-side JavaScript framework that supports authors of data stories. This support includes configurable aspects (depending, among others, on the “shape” of the imported data and using pre-defined templates) and automation (e.g., to construct a particular story, suggest charts, etc.). Story authors (such as data journalists, procurement analysts, or public-transparency enthusiasts) are able to import their own data sets (or use openly available ones), perform basic analyses to determine features of interest within the data, and then construct a report or slideshow style data story.
There are currently four major steps of the storytelling tool (see Figure 7):
1. **Import data**: The user imports data (in comma-separated-value format) into the tool and browses a high-level overview of it (including headers, data types, example values, and, where appropriate, max/min values and value distributions). The user's data neither leaves their client machine nor is stored on external servers.
2. **Analyse data**: The tool detects and highlights to the user features of interest within the data (including trends and correlations), and suggests that the user include these in their story.
3. **Create story**: Users can create and edit sections of text, charts, and images. Users are able to add annotations to charts (to highlight key regions of interest and add further context to visualisations). The tool uses a rule-based templating system to recommend additional sections for the story (based on the analysis of the data and on the currently included story sections).
4. **Export story**: The user is able to export the created story to HTML (in multiple formats for different purposes, e.g., magazine-style, slide-style) and to a story format (a JSON-like format for saving/loading stories-in-progress).
The tool provides contextual information describing why features have been recommended, and it allows users to supplement the generated images with text to explicitly highlight regions of interest. Finally, the rule-based narrative flow generation mechanism, using pre-defined structures weighted based on the current state of the data story, recommends and classifies narrative blocks based on Kosara’s Claim-Fact-Conclusion pattern [36]. The tool is released as open source and is available on GitHub [37].
7. Uptake and adoption
The data and platform components pointed out throughout the article are made available openly for the community to contribute and use; a catalogue is available online [38] with pointers to the code repositories, online versions of the artefacts, and relevant documentation (see Figure 8).
The uptake of our platform and KG has been exemplified in four different cases by four different stakeholders so far:
1. The Spanish company OESIA [39] aims at providing a better understanding of how public administrations specify and evaluate public tenders, thus delivering the insights needed to improve the efficiency of procurement processes as well as lowering the barriers for companies, mainly SMEs, to access public tenders, which will lead to better internationalisation and a higher SME participation share in tenders across Europe.
2. The city of Zaragoza, Spain, aims to respond to the needs of citizens, with new services for viewing economic information and contracting, favouring the understanding and knowledge of the data; reusers, generating their own services and developing an API based on common criteria to facilitate interoperability; and the institutions, working on tools to achieve a more transparent and efficient management of contracting processes.
3. The Italian company CERVED [40] aims to enable easier supplier selection by combining enriched company data with procurement contract data, with a focus on the Italian market. It addresses three main customer needs: supplier analysis in terms of business risk information (default) and procurement risk (collusion); easing and speeding up the procurement decision process by providing supplier scores and rankings; and dealing with scarce offers/bidders by scouting for new suppliers.
4. The Ministry of Public Administration in Slovenia aims to spot potentially unwanted behaviour in public procurement. These findings could then be used to adjust legislation to curtail such unwanted actions or to direct the focus of the regulatory bodies to the discovered cases. The aim is to make public procurement even more transparent and generate new confidence, which would result in more offers being made in Slovenian public procurement and lead to a rise in competition and better value for taxpayers.
The knowledge graph API, the KG, and the storytelling tool are being used by OESIA and by the city of Zaragoza. OESIA created a commercial tool for tender analysis, which is offered to SMEs. Zaragoza includes economic information, including public procurement, in their transparency portal [41]. Regarding the advanced tools, the anomaly detection tool is being used by the Ministry of Public Administration in Slovenia for detecting procurement anomalies, while the cross-lingual similarity search is being used by CERVED for finding tenders in other countries/languages and offering this as part of their services. The categories of users of the system include civil servants (i.e., Zaragoza and Slovenia), citizens (i.e., Zaragoza), and companies, especially SMEs (i.e., CERVED and OESIA). As of October 2020, over 4,000 queries have been submitted to the system APIs.
Our ontology network is proposed as a way for governments to publish open data about procurement. An example is the case of Zaragoza, which has already adopted our ontology network. We plan to maintain the KG in the context of already funded innovation projects. Maintenance will include ingesting new data and operating the system. Agreements with the data providers, i.e., OpenOpps and OpenCorporates, have
[37] https://github.com/tbfy/storytelling
[38] https://tbfy.github.io/platform
[40] https://company.cerved.com
[41] https://zaragoza.es/sede/servicio/transparencia
been established to provide the KG with data on a continuous basis.
8. Evaluation
We conducted a set of evaluations focusing on the platform components and the advanced services and tools. The results are presented in what follows.
8.1. Platform and KG
We justify the practical value of the platform and KG in terms of (i) the extent to which the KG is able to meet the key information needs, (ii) the computational requirements and characteristics of the data ingestion process, and (iii) the data access performance through the knowledge graph API.
Regarding the KG and its ability to meet the key information requirements, the development of the ontologies underlying the KG followed established ontology development processes, and the information needs specified earlier in their respective development processes are met [21, 26]. A key indicator is the adoption of the KG and KG-based platform components by stakeholders, including two public and two non-public organisations, as described in Section 7. Figure 9 shows an example query representing the value of the ontology network and the integrated data sets. The query gathers a list of organisations, their numbers of employees and activity categories, and the award amounts and currencies for the awards with which these companies are associated as suppliers. This query brings together two key pieces of information: the award amount from the procurement data set, represented through the OCDS ontology, and the number of employees from the company data set, represented through the euBusinessGraph ontology. Such queries enable advanced analytics, such as the anomaly detection approaches (e.g., the statistical approach) presented in Section 6, over originally disconnected data sets.
Regarding the data ingestion process, we are running the ingestion pipeline on a powerful server with the following hardware specification: 2x Xeon Gold 6126 (12 cores, 2.4 GHz, HT) CPU, 512 GB main memory, 1x NVIDIA Tesla K40c GPU, and 15 TB HDD RAID10 & 800 GB SSD storage. We use a workload manager system to schedule daily ingestion jobs with 1 core and 16 GB of memory allocated.
Fig. 9. A SPARQL query spanning over the integrated procurement and company data sets.
```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX ocds: <http://data.tbfy.eu/ontology/ocds#>
PREFIX ebg: <http://data.businessgraph.io/ontology#>
PREFIX regOrg: <http://www.w3.org/ns/regorg#>
SELECT * WHERE {
  ?award rdf:type ocds:Award .
  ?award ocds:hasAwardValue ?awardValue .
  ?supplier ocds:isSupplierFor ?award .
  ?supplier owl:sameAs ?org .
} LIMIT 1000
```
On average, 2500 OCDS releases are processed and 2400 suppliers (i.e., companies) are looked up per day for the Slovenian data. The average daily performance for each data ingestion step is given below:
- **Step 1** (Download procurement data): around 1 minute per day.
- **Step 2** (Reconcile supplier data): less than 20 minutes per day.
- **Step 3** (Enrich JSON data): around 1 minute per day.
- **Step 4** (Convert JSON to XML): less than 1 minute per day.
- **Step 5** (Map XML data to RDF): less than 1 hour per day.
- **Step 6** (Publish RDF to database): around 2 minutes per day.
As these figures suggest, the daily data ingestion process could be executed several times during the day, or as one large overnight batch, without any computational problems.
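The six-step daily job can be sketched as a simple sequential pipeline. The stub functions and toy data below are hypothetical stand-ins for the real pipeline components, not the project's actual implementation:

```python
# Hypothetical step functions standing in for the real pipeline components.
def download_procurement_data(source):      # Step 1: fetch OCDS releases
    return [{"release": i, "supplier": f"company-{i}"} for i in range(3)]

def reconcile_suppliers(releases):          # Step 2: look up supplier identifiers
    return [dict(r, supplier_id=f"id-{r['release']}") for r in releases]

def enrich_json(releases):                  # Step 3: add company data to the JSON
    return [dict(r, employees=10 * r["release"]) for r in releases]

def convert_to_xml(releases):               # Step 4: JSON -> XML (stubbed)
    return [f"<release id='{r['release']}'/>" for r in releases]

def map_to_rdf(xml_docs):                   # Step 5: XML -> RDF triples (stubbed)
    return [f"_:r{i} a ocds:Release ." for i, _ in enumerate(xml_docs)]

def publish_rdf(triples):                   # Step 6: load into the triple store
    return len(triples)

def run_daily_ingestion(source="slovenia"):
    """Run the six steps in order; return how many triples were published."""
    releases = download_procurement_data(source)
    releases = reconcile_suppliers(releases)
    releases = enrich_json(releases)
    xml_docs = convert_to_xml(releases)
    triples = map_to_rdf(xml_docs)
    return publish_rdf(triples)
```

In practice, a workload manager (as described above) would invoke `run_daily_ingestion` once per day per data source.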
Regarding the data access performance, we are running the knowledge graph storage and API services in Docker containers on an Amazon EC2 m5.2xlarge virtual machine with 8 vCPU and 32GB memory. The SPARQL query in Figure 9 takes around 0.58 seconds to complete. We list performance figures for a few example knowledge graph API calls invoked by a web client over the Internet (each executed 10 times and averaged):
- API calls of type “get a contracting process by id”, “get a tender by id”, “get an award by id”, and “get an organisation by id”: around 0.15 seconds,
- API calls of type “get 1000 contracting processes”, “get 1000 tenders”, “get 1000 awards”, “get 1000 organisations”: around 0.30 seconds,
- API calls of type “get 1000 contracting processes and associated awards”, “get 1000 contracting processes and associated tenders”, “get 1000 contracting processes and associated contracts”: around 0.40 seconds.
The API calls are answered in reasonable time intervals offering a feasible data access layer for end-user applications built on top.
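The figures above were obtained by executing each call 10 times and averaging. A minimal measurement harness in that style might look as follows; the `fake_api_call` stand-in is our own illustration, since a real client would issue HTTP requests to the knowledge graph API:

```python
import time

def average_latency(call, runs=10):
    """Execute `call` `runs` times; return the mean wall-clock latency in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Stand-in for a real API call such as "get a tender by id"; a real client
# would perform an HTTP GET against the knowledge graph API here.
def fake_api_call():
    time.sleep(0.001)

mean_seconds = average_latency(fake_api_call)
```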
8.2. Anomaly detection
Together with experts from the Ministry of Public Administration in Slovenia, we analysed the Slovenian procurement data in the KG using our anomaly detection system. We report three example cases to provide empirical evidence for the usefulness of the proposed anomaly detection solution.
---
62 https://aws.amazon.com/ec2/instance-types/m5/
Case 1: The unsupervised learning approach is designed to identify tenders with the highest deviations from the “baseline” (see Figure 10 (a)). For example, our method identified a public procurement with a tender value of 92 million EUR, won by the bidder Telekom of Slovenia, the biggest telecommunication provider in Slovenia. The buyer is a public company, DARS d.d., the Motorway Company in the Republic of Slovenia. A quick look into the public spending data in Erar⁶³ shows that DARS paid Telekom of Slovenia around 34.1 million EUR during October 2014, but there was one very large transaction of 26.2 million EUR in April 2018, while other transactions were much smaller. This example shows that users can quickly spot deviations and find hints on data that stick out and are worthy of more in-depth scrutiny.
Case 2: Using a decision tree (see Figure 10 (b)), we identified criteria for successful tenders. We currently define a successful tender as one that received more than one bid (i.e., there is competition, so we assume the tender is successful); this definition could be redefined by decision makers. Our analysis showed that a tender is successful if the public institution that opened it is small (fewer than 1375 employees) and if the bidding is done in a group; in that case, the chance of having more than one provider is 70%. This analysis therefore shows that small buyers attract more competition than big ones, especially if they allow group bidding. This gives decision makers a clear signal, namely that group bidding for tenders opened by small public institutions should be encouraged.
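The buyer-size split reported above (fewer than about 1375 employees) can be illustrated with a minimal search for the best single-feature threshold by Gini impurity. The toy data below are our own illustration, not the Slovenian data set, and a real decision tree would consider multiple features (e.g., group bidding) recursively:

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(values, labels):
    """Find the threshold on one numeric feature minimising weighted Gini impurity."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: buyer size in employees vs. whether the tender got more than one bid.
sizes = [50, 200, 900, 1300, 2000, 5000]
success = [1, 1, 1, 1, 0, 0]
threshold, impurity = best_split(sizes, success)
```

On this toy data the search recovers a perfect split at 1300 employees, mirroring how the tree in Figure 10 (b) separates small from large buyers.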
Case 3: Using statistical analysis, we developed a visual presentation of the interdependence between tender value and the number of employees of bidders (see Figure 10 (c)). In the upper left corner of the graph, we see big companies that won small tenders, while in the bottom right corner there are companies with a small number of employees that won big tenders. Based on this statistical analysis, we selected one bidder that stands out on the positive deviation side. The bidder is a company registered for the wholesale of pharmaceutical goods and medical supplies. It has no employees and no webpage, yet won a tender with a value of 9 million EUR. After checking the history of this bidder’s business with the Slovenian public sector, we found some interesting transactions from the Slovenia Forest Service to this company. This is a hint for further manual investigation.
The findings presented here do not necessarily indicate an illegal situation; however, they provide pointers for further investigation. The presented cases demonstrate that the anomaly detection system could be useful for finding interesting patterns in large procurement data sets.
8.3. Cross-lingual search
The evaluation of cross-lingual document similarity through unsupervised probabilistic topic models described in [33] was extended to handle Portuguese and Italian texts, in addition to those already handled in English, Spanish and French. The method was evaluated in a document retrieval task by using a set of documents previously tagged with categories.
The JRC-Acquis corpus [37] was used to create the cross-lingual topic models. The training-test package for each language-specific model includes more than 81k texts tagged with subject domains according to the EUROVOC thesaurus [38], and is publicly available for reuse.
Pre-processing of the documents was required to clean the texts and to build a suitable data set for the model. High-frequency terms were assumed not to be specific to a particular topic, so words present in more than 90% of the corpus were considered stop-words and removed from the model. Likewise, rare terms were considered not representative of a single topic, since they do not appear often enough to infer that they are salient for any topic; thus, words present in less than 0.5% of the corpus were also removed. Lemmatized forms of nouns, verbs and adjectives were used to create the bag-of-words, and documents with fewer than 100 characters were discarded, since LDA has been shown to perform worse on such texts [39].
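The frequency-based filtering described above can be sketched with the standard library. The thresholds (90% and 0.5% document frequency, 100-character minimum) come from the text, while the helper and the toy corpus are our own illustration (real pre-processing would also lemmatize and restrict to nouns, verbs and adjectives):

```python
from collections import Counter

def filter_corpus(docs, max_df=0.90, min_df=0.005, min_chars=100):
    """Drop too-short documents, then remove over- and under-frequent words."""
    docs = [d for d in docs if len(d) >= min_chars]
    tokenised = [d.lower().split() for d in docs]
    # Document frequency: in how many documents does each word appear?
    df = Counter(w for toks in tokenised for w in set(toks))
    n = len(tokenised)
    keep = {w for w, c in df.items() if min_df <= c / n <= max_df}
    return [[w for w in toks if w in keep] for toks in tokenised]

# Tiny illustration (min_chars lowered so the toy documents survive):
sample = ["the cat sat", "the dog ran", "the cat ran"]
filtered = filter_corpus(sample, min_chars=0)
```

Here “the” occurs in every document (document frequency 1.0 > 0.90) and is filtered out as a stop-word, while the remaining words are kept.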
Following [33], we set the number of topics to $K = 500$ and ran the Gibbs sampler for 1000 training iterations of LDA. The Dirichlet priors $\alpha = 0.1$ and $\beta = 0.01$ were used to create the word distributions for each topic. The lists of synsets related to the top-5 words of each topic were identified, and the 3-level hierarchy of topics per document was replaced by a 3-level hierarchy of synsets. Probabilistic topic models
---
63https://erar.si
65http://eurovoc.europa.eu
66http://library.linkeddata.es/data/jrc/select?q=*:*
in Italian⁶⁷ and Portuguese⁶⁸ were created and added to the list of available models to infer document relations (i.e., Spanish, English and French). They were trained independently without previously establishing any type of alignment between their topics.
We also used a supervised version of LDA to force the correspondence between the categories identified in the EUROVOC thesaurus and the latent topics of the model. Theme-aligned probabilistic topic models were created in Italian⁶⁹ and Portuguese⁷⁰. They share the topics but not their definitions (i.e., vocabulary).
---
⁶⁷http://librairy.linkeddata.es/jrc-it-model-unsupervised
⁶⁸http://librairy.linkeddata.es/jrc-pt-model-unsupervised
⁶⁹http://librairy.linkeddata.es/jrc-it-model
⁷⁰http://librairy.linkeddata.es/jrc-pt-model
A simple way of assessing the output quality of the topic models is to inspect the top words associated with a particular topic learned during training. Samples of cross-lingual topics are provided in Figure 6. This visual inspection of the top words associated with each topic can be considered an initial qualitative evaluation, suitable for human judges.
A collection of 1k randomly selected documents (monolingual, bilingual and multilingual) was annotated by the category-based and synset-based topic alignment algorithms. Then, we randomly took articles and searched for documents that share the same categories as the query document (i.e., the ground-truth set). Next, the query text was used to search for similar documents using category-based and synset-based annotations. We evaluated the performance of the algorithms in terms of precision@3, precision@5 and precision@10.
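The precision@k metric used here is straightforward to compute: a retrieved document counts as relevant when it belongs to the query's ground-truth set. The helper and toy data below are our own illustration:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that appear in the relevant set."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc in top_k if doc in relevant) / len(top_k)

# Example: 4 of the top-5 results share a category with the query document.
ranked = ["d1", "d2", "d3", "d4", "d5", "d6"]
ground_truth = {"d1", "d2", "d4", "d5", "d6"}
p5 = precision_at_k(ranked, ground_truth, 5)
```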
The results (see Table 2) were quite promising across languages, with performance close to that of the supervised approach in terms of accuracy, although better performance is achieved with English texts (as expected, given the quality of the tools for that language). This suggests that the process of annotating topics with sets of synonyms should be improved to filter out elements that are not sufficiently representative. Our future work will go in that direction, incorporating context information to identify the most representative synset for each topic.
### 8.4. Data storytelling
To evaluate the tool, a targeted user study was carried out with five participants, chosen as people who are familiar with creating data stories and who work with data in a professional capacity. The study was approved by the University of Southampton’s ethics board.
The evaluation process took the form of a contextual inquiry. Each participant was provided with a tutorial video explaining how the tool functioned and a sample data set. They were then asked to use the tool, and the provided data, to plan out a draft of a data story (with some guidance on aspects of the data that they should investigate) and were asked to talk through their actions and intentions as they progressed. Finally, they were asked to take part in a semi-structured interview exploring the full process.
During the data upload process, the primary observation made by participants was that they would, under normal circumstances, analyse the data in a standalone data analysis tool (for example Microsoft Excel or Google Sheets) to get a full overview of the data and, failing that, more ability to view and manipulate the raw data (for example, sorting by columns) would be required. In addition, the “dependencies” field (since renamed “Relationships”) often proved confusing to participants; a more thorough explanation (either in-tool or as part of the supplementary material) may be required to address this. However, once the concept (that this allowed them to examine suspected relationships in the data) was explained, they quickly understood and could utilise the feature. As such, there are two primary approaches that could be taken to address the observations described above; firstly, a small number of UI changes (for example, the ability to sort or filter columns) could be used to introduce (modestly) extended data analysis capability. Secondly, by simplifying the initial data interface and reframing the
---
**Table 2**
Precision@k (mean and standard deviation) for cross-lingual document retrieval.

| | | | | | | |
|---|---|---|---|---|---|---|
| p@3 mean | 0.84 | 0.83 | 0.79 | 0.78 | 0.82 | 0.81 |
| p@3 dev | 0.26 | 0.26 | 0.27 | 0.29 | 0.23 | 0.29 |
| p@5 mean | 0.82 | 0.80 | 0.77 | 0.75 | 0.80 | 0.78 |
| p@5 dev | 0.25 | 0.25 | 0.25 | 0.27 | 0.23 | 0.25 |
| p@10 mean | 0.77 | 0.76 | 0.72 | 0.71 | 0.77 | 0.75 |
| p@10 dev | 0.23 | 0.25 | 0.25 | 0.27 | 0.22 | 0.21 |
---
ERGO ID 53399
tool (and the supplementary material, such as documentation and tutorials) to emphasise the supportive nature of the tool and the fact that it should not be a replacement to more powerful data analysis packages, and to highlight that the tool’s primary purpose is (and/or, should be) narrative first and foremost, user expectations can be managed.
Participants frequently mentioned the tool’s ability to highlight possible features of interest within the dataset as being a highly useful feature for their workflow. The key challenge participants faced with using this part of the tool was that, particularly with scatterplots and outliers, it is difficult to determine which row of the dataset each point represents in the preview of the chart. As the tool is unable (due to the limitations of author-known context) to establish which is the key field being represented by the rows of the dataset (e.g. “country”), this would need to be manually determined by the user (possibly at the data upload stage) before relevant tooltips could be displayed to guide the process. Similar requests were made to enhance this part of the process to fully leverage the tool’s strengths, such as by visually highlighting the detected features (e.g. the trend, clusters, or outliers). One participant noted that the default values used by the tool to determine strength-of-correlation are subjective; depending on the domain, a correlation of (e.g.) \( r = 0.4 \) could be significant. Similarly, another participant observed that it can often be the lack of correlation (rather than the presence) that can indicate a particularly interesting story within the data.
The narrative structure recommendations were a divisive element of the tool; some participants found that the extra structure they afforded was valuable (particularly in the context of semi-automated news and block-based reporting), while others felt that they may be too prescriptive and did not match their existing workflow. One key observation that shed some light on this view was that the current narrative structure (based on claim-fact-conclusion) is highly “academic” and, while suitable for creating formal reports, does not map to all domains (such as journalistic articles). As such, if the recommender supported other narrative structures, such as the proverbial “inverted pyramid” (possibly chosen at story-creation time by the user), the tool would have wider reach and greater benefit across different domains.
The story export feature, perhaps surprisingly, proved to be the feature most difficult to integrate with the current data-story author workflow. This is because many of the participants’ publishing workflows currently require proprietary in-house tools (or otherwise integration with third-party ones). While there is little that can be done to shift industry workflows, some of these issues could be mitigated in future by providing non-web-based output formats (such as .odt or .pdf) and/or by developing closer integration with third-party tools (such as Wordpress). Another concern raised by participants was the need for the ability to apply simple formatting to the created stories. At present, the tool deliberately has minimal styling capability, such that this can be handled by the authors’ own house style (for example, their existing CSS templates). However, limited formatting (for example, in the form of section headers, semantic markup, and/or markdown) could be included in future iterations to support this further.
Overall, the tool provides a serviceable platform for developing data stories (particularly when used in concert with other tools) that can freely be iterated upon, developed, and extended by different communities that require domain specific functionalities. While the tool cannot replace the full storytelling pipeline, it does not set out to do so, and instead succeeds in supporting authors by providing an ability to highlight key features and recommend story structures.
9. Discussion
There are plenty of lessons learned in the context of this work, which may be applicable to the construction of other KGs in similar or different domains. In what follows, we discuss these from two main perspectives: Semantic Web and data quality.
9.1. Semantic technologies
We used Semantic Web technologies to integrate disparate open data sources in a standardised way. We list a set of particular observations below:
(i) Semantic technologies enabled ingesting disparate data sources, integrating relevant data sets (e.g., company and procurement data), and publishing data in a uniform way by using existing tools and good practices. However, getting and pre-processing the data (e.g., mapping and curation) was a major time-consuming task.
(ii) The KG enabled easier and advanced analytics, which was otherwise not possible, by connecting suppliers appearing in the procurement data to
companies in company data. However, the lack of identifiers or identifying information for key entities, such as companies, in original data sources hampers the reconciliation process.
(iii) The chosen Semantic Web technologies and tools scaled well for ingesting and provisioning large amounts of data, and the RESTful approach was useful for bringing Linked Data to non-Semantic Web application developers. However, more support is required, such as visual editors for specifying mappings and data transformations.
(iv) The process of building a high-quality KG that can be used extensively by users would be clearly improved if all data sources were providing their procurement data in a more structured manner. There are still many documents provided as PDFs (even scanned PDFs), hindering techniques like the ones described for cross-lingual search.
(v) Data quality, as described below, and the lack of data (currently, due to many types of regulations across countries, not all contracting processes, especially the smallest ones, are published) are still a relevant issue and reduce the result quality of ML processes such as anomaly detection, cross-lingual search, and reconciliation.
The main takeaway here is that data publishers should publish their data through standard vocabularies, ontologies, and APIs. This would, in the first place, save data consumers from ad-hoc data collection and integration efforts (e.g., Web scraping, data curation, reconciliation), and the resources freed could be redeployed for value-creation activities. Similar solutions could be provided using other technologies; however, without following the Linked Data and Semantic Web principles, they would remain ad-hoc, require major restructuring efforts with each new data set, and could not easily be scaled, given the various independent data publishers and consumers.
9.2. Data quality
We faced a relatively large number of data quality issues, even though there are mandates in place for buyers to provide correct data. This particularly applies to procurement data sources. These data quality issues could be classified as:
(i) **Missing data:** Data is frequently missing. Among others, the least frequently completed field in the tender and contracting data is the value field; it is usually completed in less than 10% of tender notices. One item of data that is particularly important to procurement transparency is the reference data required to link a contract award to a tender notice (a problem very common in the TED data). We found that just 9% of award notices provided a clear link between tenders and contracts. Consequently, the majority of contract award notices were orphaned, with no link to their source tenders.
(ii) **Duplicate data:** Publishers frequently publish to multiple sources in order to meet the legal requirements of their host country and that of the European Union. This means that all over-threshold tenders are available at least twice. The task of managing duplicates is not always simple. It is common for different publishing platforms to have different data schemas and interoperability between schemas is not guaranteed.
(iii) **Poorly formed data:** Sources frequently provide malformed data, or data that cannot reasonably be parsed by code. The tender and contract value fields often contain string values rather than numbers (the same applies to dates). Across the sources, the use of character delimiters in value data is heterogeneous, with different countries using different delimiters to separate thousands and to indicate decimals.
(iv) **Erroneous data:** Structured data such as numeric and date records are frequently a problem. Buyers often submit zero-value entries in order to comply with the mandate, and the lack of validation on date-related data has allowed buyers to record inconsistent dates. There are some contracts where the date of publication exceeds the end date of the contract, or the start date of the contract is later than its end date.
(v) **Absent data fields:** In some cases, the sources lack core pieces of information, for instance, there is no value field in a number of European sources. A large number of sites also fail to publish the currency of their monetary values. In all cases, if a publisher sought to add the additional information, such as a different currency, there would be no capacity in the system to provide the information required in a structured form.
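The delimiter and date problems in items (iii) and (iv) can be mitigated by validation at ingestion time. The following stdlib sketch is one possible approach (helper and field names are our own, not the platform's schema): it normalises heterogeneous delimiter conventions such as "1.234,56" vs. "1,234.56" and flags the date inconsistencies described above.

```python
from datetime import date

def parse_amount(text):
    """Parse a monetary string whose thousands/decimal delimiters are unknown."""
    s = text.strip().replace(" ", "")
    if "," in s and "." in s:
        # The rightmost separator is taken as the decimal mark.
        if s.rfind(",") > s.rfind("."):
            s = s.replace(".", "").replace(",", ".")   # e.g. "1.234,56"
        else:
            s = s.replace(",", "")                     # e.g. "1,234.56"
    elif "," in s:
        head, _, tail = s.rpartition(",")
        # A lone comma followed by exactly two digits is read as a decimal mark.
        s = head.replace(",", "") + "." + tail if len(tail) == 2 else s.replace(",", "")
    elif s.count(".") > 1:
        s = s.replace(".", "")                         # multiple dots: thousands only
    return float(s)

def date_errors(record):
    """Flag the date inconsistencies described in item (iv)."""
    errors = []
    if record["start"] > record["end"]:
        errors.append("start date is after end date")
    if record["published"] > record["end"]:
        errors.append("publication date is after contract end date")
    return errors

amount = parse_amount("1.234,56")
issues = date_errors({"start": date(2020, 5, 1),
                      "end": date(2020, 1, 1),
                      "published": date(2020, 6, 1)})
```

The single-comma heuristic is deliberately simple; a production validator would also consider the publisher's country and currency conventions.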
Most of these problems could be resolved through the use of standards and validation at the point of data entry. Requiring buyers to publish records to a standard would, in turn, require the platform providers to both mandate the field format and validate data entries. The usage of an ontology network for the development of the KG allowed us to inform public administrations willing to provide data on the minimum set of data items that are needed, and some of them are already adapting their information systems for this purpose [7].
10. Conclusions
In this article, we presented an open linked data platform for constructing and publishing a KG for public procurement data through an ontology network and a set of APIs. We also presented a set of advanced services and tools for using and analysing these data, including anomaly detection, cross-lingual search, and data storytelling. We provided evidence for adoption and a series of evaluations from various dimensions, showing that a KG approach based on Semantic Web and Linked Data principles and technologies is a viable solution for integrating and analysing large and disparate data sources. We released all the software components and data sets (both original and transformed) openly for public use.
Future work includes, first, integrating new related data sets, such as spending data (i.e., transactions) [40], for extracting more complex insights through ML. Second, the use of non-ML techniques, such as crowd-sourcing [41], is to be explored in order to improve the data quality and data linking efforts. Finally, developing high-level visual tools (e.g., [42]) for data mappings and transformations would be essential to aid the data integration process.
Acknowledgement
The work presented in this article was funded by the EC H2020 project TheyBuyForYou (grant 780247).
References
On the Reaction to Deprecation of 25,357 Clients of 4+1 Popular Java APIs
Sawant, Anand; Robbes, Romain; Bacchelli, Alberto
DOI
10.1109/ICSME.2016.64
Publication date
2016
Document Version
Accepted author manuscript
Published in
Proceedings - 2016 IEEE International Conference on Software Maintenance and Evolution, ICSME 2016
On the reaction to deprecation of 25,357 clients of 4+1 popular Java APIs
Anand Ashok Sawant
Delft University of Technology
Delft, The Netherlands
A.A.Sawant@tudelft.nl
Romain Robbes
PLEIAD @ DCC
University of Chile, Chile
rrobbes@dcc.uchile.cl
Alberto Bacchelli
Delft University of Technology
Delft, The Netherlands
A.Bacchelli@tudelft.nl
Abstract—Application Programming Interfaces (APIs) are a tremendous resource—that is, when they are stable. Several studies have shown that this is unfortunately not the case. Of those, a large-scale study of API changes in the Pharo Smalltalk ecosystem documented several findings about API deprecations and their impact on API clients.
We conduct a partial replication of this study, considering more than 25,000 clients of five popular Java APIs on GitHub. This work addresses several shortcomings of the previous study, namely: it studies the clients of several distinct APIs in a popular, statically typed language, with more accurate version information. We compare and contrast our findings with the previous study and highlight new ones, particularly on API client update practices and the startling similarities between reaction behavior in Smalltalk and Java.
1. INTRODUCTION
An Application Programming Interface (API) is a definition of functionalities provided by a library or framework that is made available to an application developer. APIs promote the reuse of existing software systems [1]. In his landmark essay “No Silver Bullet” [2], Brooks argued that reuse of existing software was one of the most promising attacks on the essence of the complexity of programming: “The most radical possible solution for constructing software is not to construct it at all.”
Revisiting the essay three decades later [3], Brooks found that indeed, reuse continues to be the most promising attack on essential complexity. APIs enable this. To cite a single example, we found at least 15,000 users of the Spring API.
However, reuse comes with the cost of dependency on other components. This is not an issue when said components are stable, but evidence shows that APIs are not always stable: the Java standard library, for instance, has an extensive set of deprecated API elements. API developers often deprecate features, replace them with new ones, and over time remove the deprecated features. These changes can break clients’ code. Studies such as Dig and Johnson’s [4] found that API changes that break client code are common.
The usage of a deprecated feature can be potentially harmful. Features may be marked as deprecated because they are not thread safe, contain a security flaw, or will be replaced by a superior feature. The inherent danger of using a feature that has been marked as obsolete should be motivation enough for developers to transition to the replacement feature suggested by the API developers.
Besides the above dangers of using deprecated features, they also lead to reduced code quality, and therefore to increased maintenance costs. With deprecation being a maintenance issue, we would like to see if API clients actually react to deprecated features of an API.
To our knowledge, Robbes et al. conducted the largest study of the impact of deprecation on API clients [5], investigating deprecated methods in the Squeak and Pharo software ecosystems. This study mined more than 2,600 Smalltalk projects hosted on the SqueakSource platform. Based on the information gathered, they analyzed whether the popularity of deprecated methods increased, decreased, or remained the same after their deprecation.
The Smalltalk study found that API changes caused by deprecation can have a major impact on the ecosystem. However, a small percentage of the projects actually reacts to an API deprecation. Of the projects that do react, most of them systematically replace the calls to deprecated features with those that are recommended by API developers. Surprisingly, this was done despite the fact that API developers in Smalltalk do not appear to be documenting their changes as well as can be expected.
The main limitation of this study is its focus on a niche programming community, i.e., Pharo. This resulted in a small dataset with information from only 2,600 projects in the entire ecosystem. Additionally, with Smalltalk being a dynamically typed language, the authors had to rely on heuristics to identify the reactions to deprecated API features.
We conduct a non-exact replication [6] of the previous Smalltalk [5] study, also striving to overcome its limitations. We study the reactions of more than 25,000 clients of 5 different APIs, using the statically-typed Java language; we also collect accurate API version information.
Our results confirm that only a small fraction of clients react to deprecation, also in the Java ecosystem. Out of those, systematic reactions are rare and most clients prefer to delete the call made to the deprecated entity as opposed to replacing it with the suggested alternative one. This happens despite the carefully crafted documentation accompanying most deprecated entities.
II. METHODOLOGY
We define the research questions and describe our research method contrasting it with the study we partially replicate [5].
A. Research Questions
In line with our partial replication target, we keep the research questions as close as possible to those of the original work. Given our additional information, we add one novel research question (RQ0) and alter the order and, partially, the methodology we use to answer the research questions; this leads to some differences in their formulation. The research questions we investigate are:
RQ0: What API versions do clients use?
RQ1: How does API method deprecation affect clients?
RQ2: What is the scale of reaction in affected clients?
RQ3: What proportion of deprecations does affect clients?
RQ4: What is the time-frame of reaction in affected clients?
RQ5: Do affected clients react similarly?
B. Research Method, Contrasted With Previous Study
Robbes et al. analyzed projects hosted on the SqueakSource platform that used the Monticello versioning system. The dataset contained 7 years of evolution of more than 2,600 systems, which collectively had over 3,000 contributors. They identified 577 deprecated methods and 186 deprecated classes in this dataset. While its results were very informative, this previous study had several shortcomings that this follow-up study addresses. We describe the methodology for collecting the data for this study at increasingly finer granularity: starting from the selection of the subject systems, down to detecting the use of versions, methods, and deprecations.
Subject systems. The original study was based on a rather specific dataset, the Squeak and Pharo ecosystems found on SqueakSource. Due to this, the set of systems that were investigated in the previous study was relatively small. To overcome this limitation, we focus on a mainstream ecosystem: Java projects hosted on the social coding platform GitHub. Java is the most popular programming language according to various rankings [7], [8] and GitHub is the most popular and largest hosting service [9]. Our criteria for selection included popularity, reliability, and variety: We measure popularity in terms of number of clients in GitHub, length of history, and overall reputation (e.g., in Stack Overflow); we ensure reliability by picking APIs that are regularly developed and maintained; and we select APIs pertaining to different domains. These criteria ensure that the APIs result in a representative evolution history, do not introduce confounding factors due to poor management, and do not limit the types of clients that use them.
We limit our study to Java projects that use the Maven build system, because Maven based projects use Project Object Model (POM) files to specify and manage the API dependencies that the project refers to. We searched for POM files in the master branch of Java projects and found approximately 42,000 Maven based ones on GitHub. By parsing their POM files, we were able to obtain all the APIs that they depend on. We then created a ranking of the most popular APIs, which we used to guide our choice of APIs to investigate.
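The dependency-extraction step over POM files can be sketched as follows; the POM fragment, the namespace handling, and the function name are illustrative, not the study's actual pipeline:

```python
import xml.etree.ElementTree as ET

# Minimal POM fragment; real POMs declare the Maven namespace,
# which must be matched explicitly when querying elements.
POM = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>18.0</version>
    </dependency>
  </dependencies>
</project>"""

NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def extract_dependencies(pom_xml):
    """Return (groupId, artifactId, version) triples declared in a POM.

    The version may be None when the POM leaves it unspecified."""
    root = ET.fromstring(pom_xml)
    deps = []
    for dep in root.findall(".//m:dependencies/m:dependency", NS):
        gid = dep.findtext("m:groupId", default="", namespaces=NS)
        aid = dep.findtext("m:artifactId", default="", namespaces=NS)
        ver = dep.findtext("m:version", default=None, namespaces=NS)
        deps.append((gid, aid, ver))
    return deps

print(extract_dependencies(POM))  # [('com.google.guava', 'guava', '18.0')]
```

Tallying the `(groupId, artifactId)` pairs over all crawled POMs then yields the API popularity ranking described above.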
This selection step results in the choice of 5 APIs hosted on GitHub, namely: EasyMock [10], Guava [11], Guice [12], Hibernate [13], and Spring [14]. The first 6 columns of Table I provide additional information on these APIs. Subsequently, we select the main subjects of this study: the clients of APIs introducing deprecated methods. Using the aforementioned analysis of the POM files, we have the list of all possible clients. We refine it using the GHTorrent dataset [15] to select only active projects. We also remove clients that had not been actively maintained in the 6 months preceding our data collection, to eliminate ‘dead’ or ‘stagnating’ projects. In total, we obtained 25,357 projects that refer to one or more of the 5 aforementioned popular APIs. The seventh column in Table I provides an overview of the clients selected, by API.
API version usage. Explicit library dependencies are rarely mentioned in Smalltalk and there are several ways to specify them, often programmatically rather than declaratively; also, Smalltalk does not use import statements as Java does. Thus, it is hard to detect dependencies between projects (heuristics are needed [16]) and to analyze the impact of deprecated methods on clients. In contrast, Maven projects specify their dependencies explicitly and declaratively: We can determine the API version a project depends on, and hence answer more questions, such as whether projects freeze or upgrade their dependencies. In particular, we only consider projects that encode specific versions of APIs, or unspecified versions (which are resolved to the latest API version at that date). We do not consider ranges of versions; however, very few projects use those (84 across all 5 APIs, while we include 25,357 API dependencies to these 5 APIs). In addition, few projects use unspecified API versions (269 of the 25,357, which we do include).
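The filtering of version specifiers can be approximated as follows; this is a simplification of Maven's actual version-resolution rules, and the category names are ours:

```python
def classify_version(version):
    """Roughly classify a Maven <version> value.

    'specific'    -> a pinned version such as "18.0"
    'range'       -> a version range such as "[4.0,4.5)" (excluded in the study)
    'unspecified' -> no version (or legacy meta-versions), resolved to the latest
    """
    if version is None:
        return "unspecified"
    v = version.strip()
    if v.startswith(("[", "(")):       # Maven range syntax, e.g. "(,1.0]"
        return "range"
    if v in ("LATEST", "RELEASE"):     # legacy meta-versions
        return "unspecified"
    return "specific"
```

With this predicate, a pipeline would keep the 'specific' and 'unspecified' dependencies and drop the (rare) 'range' ones, mirroring the selection described above.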
Fine-grained method/annotation usage. Due to the lack of explicit type information in Smalltalk, there is no way of actually knowing if a specific class is referenced and whether the method invocation found is actually from that referenced class. This does not present an issue when it comes to method invocations on methods that have unique names in the ecosystem. However, in the case of methods that have common names such as toString or name or item, this can lead to some imprecise results. In the previous study, Robbes et al. resorted to manual analysis of the reactions to an API change, but had to discard cases which were too noisy. In this study, Java’s static type system addresses this issue without the need for a tedious, and conservative manual analysis. On the other hand, Java APIs can be used in various manners. In Guava, actual method invocations are made on object instances of the Guava API classes, as one would expect. However in Guice, clients use annotations to invoke API functionality, resulting in a radically different interaction model. These API usage variabilities must be considered.
We do not mine the JDK itself, because to identify the JDK version required by a client one needs to rely on the client using the Maven compiler plugin. Yet, this plugin is rarely used, since it is mainly used to specify a JDK version other than the default one used by the client.
III. RESULTS

In this section we answer the research questions detailed in Section II-A. Figure 1 exemplifies the behavior of an API and its clients; when possible, we refer to it to explain the methodology behind the answer to each research question.
RQ0: What API versions do clients use?
Our first research question seeks to investigate popularity of API versions and to understand the version change behavior of the clients. This sets the ground for the following answers.
We start by considering all the available versions of each API and measure popularity in terms of how many clients were actually using each version at the time of our data collection. In the example in Figure 1, we would count a popularity of 1 for v7, 2 for v6, and 1 for v4. The column 'number of clients' in Table I specifies the absolute number of clients per API and Figure 2 reports the version popularity results, by API.
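Counting version popularity as in the Figure 1 example amounts to a simple tally over the version each client uses at collection time; the client names here are hypothetical:

```python
from collections import Counter

# Snapshot mirroring the Figure 1 example: one client on v7,
# two clients on v6, and one client on v4 at collection time.
client_versions = {
    "client1": "v7",
    "client2": "v6",
    "client3": "v6",
    "client4": "v4",
}

popularity = Counter(client_versions.values())
print(popularity.most_common())  # [('v6', 2), ('v7', 1), ('v4', 1)]
```

Aggregating these tallies per API produces the fragmentation picture reported in Figure 2.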
The results show that a large number of different versions of each API are in use and that there is significant fragmentation across versions (especially in the case of Hibernate, where the top three versions are used by less than 25% of the clients). Further, older versions of the APIs tend to be more popular than newer ones.
These initial results hint that clients have, to say the least, a delayed upgrading behavior, which could be related to how they deal with maintenance and deprecated methods. For this reason, we analyze whether the clients ever updated their dependencies or whether they “froze” them, that is, never updated their API version. In the example in Figure 1, we count three clients who upgraded their version at some point in their history. If projects update, we measure how long they took to do so (the time between the release of the new version of the API on Maven Central and the update of the project's POM file).
Table II summarizes the results. The vast majority of the clients we consider freeze to one single version of the API they use. Further, we see that this holds for all the APIs, except for Spring, whose clients have at least one update in 74% of the cases. In terms of time to update, interestingly, the median is lower for clients of APIs that have more clients that update, such as Hibernate and Spring. In general, update time varies considerably—we will come back to this in RQ3.
**RQ1: How does API method deprecation affect clients?**
In RQ0 we showed that most clients do not adopt new API versions. We now focus on the clients that use deprecated methods and on whether and how they react to deprecation.
**Affected by deprecation.** From the data, we classify clients into 4 categories, which we describe referring to Figure 1:
- **Unaffected:** These clients never use a deprecated method.
- **Potentially affected:** These clients do not use any deprecated method, but should they upgrade their version, they would be affected. Client 1 in Figure 1 belongs to this category.
- **Affected:** These clients use a method when it was declared as deprecated, but do not change the API version throughout their history, as it happens in the case of Client 2.
- **Affected and changing version:** These clients use at least one method declared as deprecated and also update their API version. Clients 3, 4, and 5 belong to this category.
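The four categories above reduce to a decision over three per-client predicates computed from the client's history; the predicate names are illustrative, not the study's implementation:

```python
def classify_client(uses_deprecated, changes_version, affected_if_upgraded):
    """Map a client's history to one of the study's four categories.

    uses_deprecated:      the client calls a method while it is deprecated
    changes_version:      the client updates its API version at least once
    affected_if_upgraded: upgrading would turn some call into a deprecated one
    """
    if uses_deprecated:
        return "affected and changing version" if changes_version else "affected"
    if affected_if_upgraded:
        return "potentially affected"
    return "unaffected"

# The clients of Figure 1, as described in the text:
print(classify_client(False, False, True))  # Client 1: potentially affected
print(classify_client(True, False, False))  # Client 2: affected
print(classify_client(True, True, False))   # Clients 3-5: affected and changing version
```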
Figure 3 reports the breakdown of the clients in the four categories. The clearest pattern is that the vast majority of clients, across all APIs, *never use any deprecated method* throughout their entire history. This is particularly surprising in the case of Hibernate, as it deprecated most of its methods (we will discuss this in RQ3). Clients affected by deprecation vary from more than 20% for Easymock and Guava, to less than 10% for Hibernate, and barely any for Spring. Of these, less than one third also change their API version, thus highlighting a very static behavior of clients with respect to API usage, despite our selection of active projects.
**Common reactions to deprecation.** We investigate how ‘Affected and changing version’ clients deal with deprecation. We exclude ‘Affected’ clients, since they do not have strong incentives to fix a deprecation warning if they do not update their API, as the method is still functional in their version. The ‘Affected and changing version’ clients of Easymock and Guava largely react to deprecated entities (71% and 65%). For Hibernate and Spring we see a similar minority of clients that react (31% and 32%). For all the APIs the relative number of clients that fix all calls made to a deprecated entity is between 16% and 22%. Out of the clients that react, we find that at the method level, the most popular reaction is to delete the reference to the deprecated method (median of 50% to 67% for Easymock, Guava and Hibernate, and 100% for Spring). We define as deletion a reaction in which the deprecated entity is removed and no new invocation to the same API is added. Some Hibernate and Guava clients roll back to a previous version where the entity is not yet deprecated. Easymock, Guava and Hibernate clients tend to replace deprecated calls with other calls to the same API; however, this number is small. Surprisingly, a vast majority of projects (95% to 100%) add calls to deprecated API elements, despite the deprecation being already in place. This concerns even the clients that end up migrating all their deprecated API elements later on.
**The strange case of Guice.** We analyzed all the Guice projects and looked for usages of deprecated annotations or methods, but we found that none of the projects used either. The reason is that Guice does not have many methods or annotations that have been deprecated. In fact, Guice follows a very aggressive deprecation policy: methods are removed from the API without being deprecated first. We observed this behavior in the Pharo ecosystem as well, and studied it separately [21]. In our subsequent research questions, we thus do not analyze Guice, as its deprecations are not explicitly marked.
**RQ2: What is the scale of reaction in affected clients?**
The work we partially replicate [5] measures the reactions to individual API changes in terms of commits and developers affected. Having exact API dependency information, we can measure API evolution on a per-API basis, rather than per API element. It is hence more interesting to measure the magnitude of the changes necessary between two API versions in terms of the number of method calls that need to be updated. Another measure of the difficulty of the task is the number of different deprecated methods one has to react to: it is easier to adapt to 10 usages of the same deprecated method than to 10 usages of 10 different deprecated methods.
**Actual reactions.** We measure the scale of the actual reactions of clients that do react to API changes. We count separately reactions to the same deprecated method and the number of single reactions. In Figure 1, client 3, after upgrading to v5 and before upgrading to v6, makes two modifications to statements including the deprecated method ‘boo’. We count these as two reactions to deprecation but count one unique deprecated method. We consider that client 5 reacts to deprecation, when rolling back from v3 to v4: we count one reaction and one unique deprecated method.
We focus on the upper half of the distribution (median, upper quartile, 95th percentile, and maximum) to assess the critical cases; we expect the effort needed in the bottom half to be low. Table III reports the results. The first column reports the absolute number of non-frozen affected clients that reacted. The scale of reaction varies: the majority of clients react on fewer than a dozen statements involving a single unique deprecated method. Spring stands out with a median number of
Figure 3. Deprecation status of clients of each API.
RQ3: What proportion of deprecations does affect clients?

The previous research question shows that most of the actual and potential reactions of clients to method deprecations involve a few unique methods. This does not tell us how these methods are distributed across all the deprecated API methods. We compute the proportion of deprecated methods that clients use.
In Figure 1, there is at least one usage of deprecated methods ‘boo’ and ‘foo’, while there is no usage of ‘goo’. In this case, we would count 3 unique deprecated methods, of which one is never used by clients.
Table V summarizes the results, including the total count of deprecated methods per API with proportion over the total count of methods and the count of how many of these deprecated methods are used by clients. APIs are not shy in deprecating methods, with more than 1,000 deprecations for Guava, Spring, or Hibernate. The case of Hibernate is particularly striking with 65% of unique methods being eventually deprecated, indicating that this API makes a heavy usage of this feature of Java. The proportion of deprecated methods that affect clients is rather low, around 10% in all 4 of the APIs.
RQ4: What is the time-frame of reaction in affected clients?
We investigate the amount of time it takes for a method to become deprecated (‘time to deprecation’) and the amount of time developers take to react to it (‘time to react’). The former is defined as the interval between the introduction of the call and when it was deprecated, as seen in client 3 (Figure 1); the latter is the amount of time between when a method was deprecated and the client's reaction to it (clients 3 and 5).
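Both intervals reduce to simple date arithmetic; clamping negative ‘time to deprecation’ values to zero is our assumption about how calls introduced after deprecation are reported:

```python
from datetime import date

def time_to_deprecation(call_introduced, deprecated_on):
    """Days between introducing a call and that method's deprecation.
    Calls added when the method is already deprecated yield 0 (assumed)."""
    return max(0, (deprecated_on - call_introduced).days)

def time_to_react(deprecated_on, reacted_on):
    """Days between a method's deprecation and the client's reaction to it."""
    return (reacted_on - deprecated_on).days

print(time_to_deprecation(date(2014, 1, 1), date(2014, 3, 1)))  # 59
print(time_to_deprecation(date(2014, 3, 1), date(2014, 1, 1)))  # 0: added after deprecation
print(time_to_react(date(2014, 3, 1), date(2014, 3, 1)))        # 0: same-day reaction
```

A median ‘time to deprecation’ of 0 days, as reported below, then means most calls were introduced on or after the day the method was deprecated.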
Time to deprecation. We analyzed the time to deprecation for each of the instances where we found a deprecated entity. The median time for all API clients is 0 days. This highlights a startling fact: Most of the introductions of deprecated method calls happen when clients already know they are deprecated. In other words, when clients introduce a call to a deprecated method, it is usually done despite the fact that they know a priori that the call is already deprecated. This indicates that clients do not appear to mind using deprecated features.
Time to react. Figure 4 reports the time it takes clients to react to a method deprecation, once it is visible. We see that, for most clients across all APIs, the median reaction time is low: It is 0 days for Guava, Hibernate, and Spring, while for Easymock it is 25 days. A reaction time of 0 days indicates that
most deprecated method calls are reacted upon on the same day the call was either introduced or marked as deprecated. Barring outliers, reaction times in Hibernate and Spring are uniformly fast (the third quartiles being at 0 and 2.5 days). Reaction times are however longer for clients of Guava and Easymock, with upper quartiles of 47 and 200 days respectively. Outliers have a long reaction time, in the order of hundreds of days.
**RQ5: Do affected clients react similarly?**
Replacing a deprecated entity with an invocation to a non-deprecated one is a desirable reaction as the client of an API continues using it. This research question seeks to investigate the clients’ behavior when it comes to replacement reactions.
Such an analysis allows us to ascertain whether an approach inspired by Schäfer et al.’s [22] would work on the clients in our dataset. Their approach recommends API changes to a client based on common, or systematic patterns in the evolution of other clients of the same API.
**Consistency of replacements.** There is no definite way to identify if a new call made to the API is a replacement for the original deprecated call, so we rely on a heuristic: We analyze the co-change relationships in each class file across all the projects; if we find a commit where a client removes a usage of a deprecated method (e.g., add(String)) and adds a reference to another method in the same API (e.g., add(String, Integer)), this new method invocation is a possible replacement for the original deprecated entity. A drawback is that in-house replacements or replacements from other competing APIs cannot be identified. Nonetheless, we compute the frequencies of these co-change relationships to find whether clients react uniformly to a deprecation.
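This heuristic can be sketched over the API calls removed and added by one commit to one class file; the method names echo the `add(String)` example in the text, and pairing every removal with every addition is our simplification:

```python
def possible_replacements(removed, added, deprecated):
    """Co-change heuristic: pair each removed deprecated call with every
    call to the same API added in the same commit to the same class file.
    Each pair is a candidate replacement, to be counted across clients."""
    return [(old, new) for old in removed if old in deprecated for new in added]

# The example from the text: a commit swaps the deprecated add(String)
# for add(String, Integer).
pairs = possible_replacements(
    removed=["add(String)"],
    added=["add(String, Integer)"],
    deprecated={"add(String)"},
)
print(pairs)  # [('add(String)', 'add(String, Integer)')]
```

Counting the frequency of each pair across all clients then indicates whether a deprecated method has one dominant (systematic) replacement or several competing ones.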
We found that Easymock has no systematic transitions: there are only 3 distinct methods for which there are replacements and the highest frequency of the co-change relationships is 34%. For Guava we find 23 API replacements; in 17% of the cases there is a systematic transition i.e., there is only one way in which a deprecated method is replaced by clients. Spring clients only react by deleting deprecated entities instead of replacing them, resulting in no information on replacements of features. In Hibernate, we find only 4 distinct methods where replacements were made. There were no systematic replacements and the maximum frequency is 75%.
Since API replacements are rather uncommon in our dataset, with the exception of Guava, we find that while an approach such as the one of Schäfer et al. could conceptually be quite useful, we would not be able to implement it in our case due to the small amount of replacement data.
**Quality of documentation.** Very few clients react to deprecation by actually replacing the deprecated call with one that is not deprecated. This led us to question the quality of the documentation of these APIs. Ideally, one would like a clear explanation of the correct replacement for a deprecated method, as in the Javadoc reported in Figure 5. Given the results we obtained, however, we suspected this might not be the case. We systematically inspected the Javadoc to see whether deprecated features had documentation on why the feature was deprecated, and whether there was an indication of an appropriate replacement or whether a replacement is needed at all.
We perform a manual analysis of the quality of the API documentation. For Guava, we investigate all 104 deprecated methods that had an impact on clients; for Easymock, we look at all 16 deprecated methods that had an impact on clients; for Spring and Hibernate, we inspected a sample of 100 methods each that have an impact on clients.
In Easymock, 15 of the 16 deprecated methods are instance creation methods, whose deprecation message directs the reader to using a Builder pattern instead of these methods. The last deprecation message is the only one with a rationale and is also the most problematic: the method is incompatible with Java version 7 since its more conservative compiler does not accept it; no replacement is given.
In Guava, 61 messages recommend a replacement, 39 state the method is no longer needed and hence can be safely deleted, and only 5 deprecated methods do not have a message. It is also the API with the most diverse deprecation
messages. Most messages that state a method is no longer needed are rather cryptic (“no need to use this”). On the other hand, several messages have more precise rationales, such as stating that functionality is being redistributed to other classes. Others provide several alternative recommendations and detailed instructions and one method provides as many as four alternatives, although this is because the deprecated method does not have exact equivalents. Guava also specifies in the deprecation message when entities will be removed (e.g., “This method is scheduled for removal in Guava 16.0”, or even “This method is scheduled for deletion in June 2013.”).
For Hibernate, all the messages provide a replacement, but most provide no rationale for the deprecation. The only exceptions are messages stating the advantages of a recommended database connection compared to the deprecated one.
For Spring, the messages provide a replacement (88) or state that the method is no longer needed (12). Spring is the only API that is consistent in specifying in which version of the API the methods were deprecated. On the other hand, most of the messages do not specify any rationale for the decision, except JDK version testing methods that are no longer needed since Spring does not run in early JDK versions anymore.
Overall, maintainers of popular APIs make an effort to provide their clients with high-quality documentation: sufficient support is provided to clients to act on a deprecation. While we sometimes found rationales as to why a method was deprecated, this was far from systematic. Although replacement is not the only suggested solution, it is the most common one; this is in contrast to the actual behavior of clients. In spite of the good quality of the documentation, clients are far from likely to follow it.
Summary of findings
We first investigated how many API clients actively maintain their projects by updating their dependencies. We found that, for all the APIs, only a minority of clients upgrade/change the version of the API they use. As a direct consequence of this, older versions of APIs are more popular than newer ones.
We then looked at the number of projects that are affected by deprecation. We focused on projects that change version and are affected by deprecation, as they are the ones that show the full range of reactions. Clients of Guava, Easymock and, to a lesser degree, Hibernate were the most affected, whereas clients of Spring were virtually unaffected by deprecation; for Guice we could find no data due to Guice's aggressive deprecation policy. We also found that most of the affected clients introduced calls to deprecated entities despite knowing that they were deprecated.
Looking at the reaction behavior of these clients, we saw that ‘deletion’ was the most popular way to react to a deprecated entity. Replacements were seldom performed, and systematic replacements were rarer still. This is despite the fact that the APIs provide excellent documentation that should aid in the replacement of a deprecated feature. When a reaction did take place, it usually happened almost immediately after the entity was first marked as deprecated.
IV. DISCUSSION
We now discuss our main findings and contrast them with the findings of the Smalltalk study we partially replicate. Based on this, we give recommendations on future research directions. We also present threats to validity.
A. Comparison with the deprecation study on Smalltalk
Contrasting our results with those of the study we partially replicate, several interesting findings emerge:
Proportion of deprecated methods affecting clients. Both studies found that only a small proportion of deprecated methods affects clients. In the case of Smalltalk, this proportion is below 15%; in our results we found it to be around 10%. Considering that the two studies investigate two largely different ecosystems, languages, and communities, this similarity is noteworthy. Even though API developers do not know exactly how their clients use the methods they write and would be interested in this information [23], the functionalities they deprecate are mostly unused by the clients; thus deprecation causes few problems. Nevertheless, this also suggests that most of the effort that API developers put into properly deprecating methods and documenting alternatives is not actually necessary: in most cases, API developers could directly remove the methods they instead diligently deprecate.
Not reacting to deprecation. Despite the differences in the deprecation mechanisms and warnings, the vast majority of the clients in both studies do not react to deprecation. In this study, we could also quantify the impact of deprecation should clients decide to upgrade their API versions, and we find that, in some cases, the impact would be very high. By not reacting to deprecated calls, clients can let the accrued technical debt grow to large and unmanageable proportions (e.g., one Hibernate client would have to change 17,471 API invocations). We also found more counter-reactions (i.e., adding more calls to methods that are known to be deprecated) than for Smalltalk clients. This may be related to the way in which the two platforms raise deprecation warnings: in Java, a deprecation gives a compile-time warning that can be easily ignored, while in Smalltalk, some deprecations lead to runtime errors, which require intervention.
Systematic changes and deprecation messages. The Smalltalk study found that, in a large number of cases, clients conduct systematic replacements of deprecated API elements. In our study, we find that, instead, replacements are not that common. We deem this difference to be extremely surprising. In fact, the clients we consider have access to very precise documentation that should act as an aid in the transition from a deprecated API artifact to one that is not deprecated; this is not the case for Smalltalk, where only half of the deprecation messages were deemed useful. This seems to indicate that proper documentation alone is not a sufficient incentive for API clients to adopt the correct behavior, from a maintenance perspective, when facing deprecated methods. As an indication to developers of language platforms, our evidence suggests considering more stringent policies on how deprecation impacts clients' run-time behavior.
Clients of deprecated methods. Overall, the behavior of API clients shows that current deprecation mechanisms are not ideal. We see two possible reasons for this: (1) developers of client projects do not see the importance of removing references to deprecated artifacts, and (2) current incentives do not overcome this situation. Incentives could lie both in the behavior of the API when introducing deprecations and in the restrictions posed by the designers of the language. This situation highlights the need for further research to understand whether and how deprecation could be revisited to keep technical debt low and improve the maintainability of software systems. In the following, we detail some first steps in this direction, clearly emerging from the findings of our study.
B. Future research directions
If it ain’t broke, don’t fix it. We were surprised that so many projects did not update their API versions. Those that do are often not in a hurry, as we see for Easymock or Guice. Developers also routinely leave deprecated method calls in their code base despite the warnings, and often even add new calls. This is in spite of all the APIs providing precise instructions on which replacements to use. As a result, the effort to upgrade to a new version piles up. Studies can be designed and carried out to determine the reasons for these choices, thus indicating how future implementations of deprecation can give better incentives to clients of deprecated methods.
Difference in deprecation strategies. Among the clients that do upgrade, we can see differences between the APIs. We were particularly impressed by Spring, which has by far the most clients and also the fewest clients using deprecated methods. It appears that their deprecation strategy is very conservative, even though they deprecated a lot of methods. This may explain why many more Spring clients upgrade their API version. Likewise, perhaps the very aggressive deprecation policy of Guice, which removes methods without warnings, has an impact on the vast majority of the clients that decide to stick with their version of Guice. We note that the APIs with the highest proportion of projects with deprecated calls are also the ones whose projects are least likely to upgrade. We did not investigate this further, as our focus was mostly on the behavior of clients, but studies suggesting to API developers the best strategies for persuading clients to follow deprecation messages would be very informative for the actual practice of software evolution.
Impact of deprecation messages. We also wonder if the deprecation messages that Guava has, which explicitly state when the method will be removed, could act as a double-edged sword: Part of the clients could be motivated to upgrade quickly, while others may be discouraged and not update the API or roll back. In the case of Easymock, the particular deprecated method that has no documented alternative may be a roadblock to upgrade. Studies can be devised to better understand the role of deprecation messages and their real effectiveness.
C. Threats to validity
Since we do not detect deprecation that is specified only by Javadoc tags, we may underestimate the impact of API deprecation in some cases. To quantify the size of this threat, we manually checked each API and found that this is an issue only for Hibernate before version 4, while the other APIs are unaffected. For this reason, the behavior we report for a fraction of Hibernate clients may not be completely accurate. We considered crawling the online Javadoc of Hibernate to recover these tags, but we found that the Javadoc of some versions of the API was missing (e.g., version 3.1.9).
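This gap stems from Java offering two deprecation mechanisms with different visibility to analysis tools: the `@Deprecated` annotation is retained in the compiled class file, whereas a Javadoc-only `@deprecated` tag leaves no trace after compilation. A minimal sketch illustrating the difference (class and method names are illustrative, not taken from the studied APIs):

```java
import java.lang.reflect.Method;

public class DeprecationForms {

    static class LegacyApi {
        /** @deprecated use {@link #create()} instead. */
        // Javadoc-only deprecation: no trace survives in the compiled class file.
        public void init() { }

        /** @deprecated use {@link #create()} instead. */
        @Deprecated // Annotation-based: retained at runtime, visible to tools.
        public void start() { }

        public void create() { }
    }

    // Detects deprecation the way a bytecode/reflection-based analysis would.
    public static boolean isDeprecated(Class<?> c, String methodName) {
        try {
            Method m = c.getMethod(methodName);
            return m.isAnnotationPresent(Deprecated.class);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(isDeprecated(LegacyApi.class, "init"));  // false
        System.out.println(isDeprecated(LegacyApi.class, "start")); // true
    }
}
```

An annotation-based detector therefore misses Javadoc-only deprecations such as those in Hibernate before version 4, which is exactly the threat described above.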
Even though our findings are focused on the clients, for which we have a statistically significant sample, some of the results depend on the analyzed APIs (such as the impact of the API deprecation strategies on the clients). As we suggested earlier in this section, further studies could be conducted to investigate these aspects.
The use of projects from GitHub leads to a number of threats, as documented by Kalliamvakou et al. [24]. In our data collection, we tried to mitigate these biases (e.g. we only selected active projects), but some limitations are still present. The projects are all open-source and some may be personal projects where maintenance may not be a priority. GitHub projects may be toy projects or not projects at all (still from [24]); we think this is unlikely, as we only select projects that use Maven: these are by definition Java projects, and, by using Maven, show that they adhere to a minimum of software engineering practices.
Finally, we only look at the master branch of the projects. We assume that projects follow the git convention that the master branch is the latest working copy of the code [25]. However, we may be missing reactions to API deprecations that have not yet been merged in the main branch.
V. RELATED WORK
Studies of API Evolution. Several studies of API evolution have been performed, at smaller or larger scales. Most of these studies focused on the API side, rather than on the client side as we did.
For example, Dig and Johnson studied and classified the API breaking changes in 4 APIs [26]; they did not investigate their impact on clients. They found that 80% of the changes were due to refactorings. Cossette and Walker [27] studied five Java APIs in order to evaluate how API evolution recommenders would perform on these cases. They found that all recommenders handle a subset of the cases, but that none of them could handle all the cases they referenced.
The Android APIs have been extensively studied. McDonnell et al. [28] investigate stability and adoption of the Android API on 10 systems; the API changes are derived from Android documentation. They found that the API is evolving quickly, and that clients have troubles catching up with the evolution. Linares-Vásquez et al. also study the changes in Android, but from the perspective of questions and answers on Stack Overflow [29], not API clients directly. Bavota et al. [30] study how changes in the APIs of mobile apps (responsible
for defects if not reacted upon) correlate with user ratings: successful applications depended on less change-prone APIs. This is one of the few large-scale studies, with more than 5,000 API applications. Wang et al. [31] study the specific case of the evolution of 11 REST APIs. Instead of analyzing API clients, they also collect questions and answers from Stack Overflow that concern the changing API elements.
Among the studies considering API clients, we find, for example, the one by Espinha et al. [32], who study 43 mobile client applications depending on web APIs and how they respond to web API evolution. Also, Raemaekers et al. investigated the relation among breaking changes, deprecation, and semantic versioning [33]. They found that API developers introduce deprecated artifacts and breaking changes in equal measure across both minor and major API versions, thus not allowing clients to predict API stability from semantic versioning. Finally, previous work involving one of the authors of this paper ([5] and [21]) comprises large-scale studies of API clients in the Pharo ecosystem. The first study focused on API deprecations, while the second one focused on API changes that were not marked as deprecations beforehand. Another work [34] analyzes deprecation messages in more than 600 Java systems, finding that 64% of deprecated methods have replacement messages.
Mining of API Usage. Studies that present approaches to mining API usage from client code are related to our work, especially with respect to the data collection methodology. One of the earliest works in this field is by Xie and Pei [35], who developed a tool called MAPO (Mining API usage Pattern from Open source repositories). MAPO mines code search engines for API usage samples and presents the results to the developer for inspection. Mileva et al. [36] worked in the field of API popularity; they looked at the dependencies of projects hosted on Apache and Sourceforge. Based on this information, they ranked the usage of API elements such as methods and classes. This allowed them to predict the popularity trend of APIs and their elements. Hou et al. [37] used a popularity-based approach to improve code completion. They developed a tool that gives code completion suggestions based on the frequency with which a certain class or method of an API is used in the API's ecosystem. Lämmel et al. [38] mine usages of popular Java APIs by crawling SourceForge to create a corpus of usage examples that forms a basis for a study on API evolution. The API usages are mined using type-resolved Java ASTs and stored in a database.
Supporting API evolution. Beyond empirical studies on APIs evolution, researchers have proposed several approaches to support API evolution and reduce the efforts of client developers. Chow and Notkin [39] present an approach where the API developers annotate changed methods with replacement rules that will be used to update client systems. Henkel and Diwan [40] propose CatchUp!, a tool using an IDE to capture and replay refactorings related to the API evolution. Dig et al. [41] propose a refactoring-aware version control system for the same purposes.
Dagenais and Robillard observe the framework’s evolution to make API change recommendations [42], while Schäfer et al. observe the client’s evolution [22]. Wu et al. present a hybrid approach [43] that includes textual similarity. Nguyen et al. [44] propose a tool (LibSync) that uses graph-based techniques to help developers migrate from one framework version to another. Finally, Holmes and Walker notify developers of external changes to focus their attention on these events [45].
VI. Conclusion
We have presented an empirical study on the effect of deprecation of Java API artifacts on their clients. This is a non-exact replication of a similar study done on the Smalltalk ecosystem. The main differences between the two studies are the type system of the targeted language (static vs. dynamic typing) and the scale of the dataset (25,357 vs. 2,600 clients).
We found that few API clients update the API version that they use. In addition, the percentage of clients that are affected by deprecated entities is less than 20% for most APIs (except for Spring, where the percentage was unusually low). Most affected clients do not react to the deprecated entity at all; when a reaction does take place, clients surprisingly prefer to delete the offending invocation rather than replace it with the recommended functionality. When clients do not upgrade their API versions, they silently accumulate a potentially large amount of technical debt in the form of future API changes they will face when they finally do upgrade; we suspect this can serve as an incentive not to upgrade at all.
The results of this study are in some aspects similar to those of the Smalltalk study. This came as a surprise to us, as we expected that reactions to deprecations by clients would be more prevalent, owing to the fact that Java is a statically typed language. On the other hand, we found that the number of replacements in Smalltalk was higher than in Java, despite Java APIs being better documented. As future work, we plan to investigate the reasons behind this and what can be improved in Java to change it.
This study is the first to analyze the client reaction behavior to deprecated entities in a statically-typed and mainstream language like Java. The conclusions drawn in this study are based on a dataset derived from mining type-checked API usages from a large set of clients. From the data we gathered, we conclude that deprecation mechanisms as implemented in Java do not provide the right incentives for most developers to migrate away from the deprecated API elements, even with the downsides that using deprecated entities entail.
Given that there is currently a proposal to revamp Java's deprecation system,4 studies such as this one and its potential follow-ups are especially timely.
4https://bugs.openjdk.java.net/browse/JDK-8065614
REFERENCES
Design Patterns for Mixed-Method Research in HCI
Koen van Turnhout, Arthur Bennis, Sabine Craenmehr, Robert Holwerda, Marjolein Jacobs, Ralph Niels, Lambert Zaad, Stijn Hoppenbrouwers, Dick Lenior, René Bakker.
HAN University of Applied Sciences
Academy of Information Technology and Communication
ABSTRACT
In this paper we discuss mixed-method research in HCI. We report on an empirical literature study of the NordiCHI 2012 proceedings which aimed to uncover and describe common mixed-method approaches, and to identify good practices for mixed-methods research in HCI. We present our results as mixed-method research design patterns, which can be used to design, discuss and evaluate mixed-method research. Three dominant patterns are identified and fully described and three additional pattern candidates are proposed. With our pattern descriptions we aim to lay a foundation for a more thoughtful application of, and a stronger discourse about, mixed-method approaches in HCI.
Author Keywords
Mixed-method research; methodology; triangulation.
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.
INTRODUCTION
The work presented in this paper is part of a long-term research effort which addresses mixed-method research in the context of multidisciplinary HCI research and of research education for HCI professionals. Mixed-method research is common in HCI [37], but there is little literature to support the design of mixed-method studies in our field. Also, we notice authors do not typically refer to their research as mixed-method research. Specifically, they tend not to make explicit how the components of their research, often borrowed from several contributing disciplines, fit together – ideally in such a way that ‘the whole is more than the sum of its parts’ [3,7]. We do think HCI researchers make sound pragmatic decisions when applying a mixed-method approach, but their practice of mixing methods does not seem to be matched by explicit underlying considerations. We believe that closing this practice-theory gap would help to better teach, discuss, design and evaluate the mixed-method research approaches which our community uses.
A possible cornerstone of a theory for mixed-method research design is the Development Oriented Triangulation (DOT) framework [37], which offers a classification of methods organized around trade-offs researchers need to make in their research planning. We used this framework in an empirical literature study, which is a common approach for addressing methodological questions like the ones addressed in this paper - see for example [7,29,36]. Through an in-depth analysis of a large portion of the NordiCHI 2012 proceedings, we identified common method mixes and the types of problem they addressed. We identified best practices for each mix through a comparison and critical discussion of papers adopting a similar mix.
We have chosen to represent our findings as mixed-method research design patterns. Originated by Christopher Alexander [1], design pattern languages have become a popular way to represent middle level design knowledge [32], which is used for software architecture [21] and interaction design [3], among other fields. Though we adopted the format of design patterns for mixed-method research designs, the results of our study can only provide a first step towards a full-fledged pattern language.
This paper is organized as follows. We first discuss existing work about mixed-method approaches within HCI, Information Systems and the Social Sciences. Next, we provide an in-depth discussion of the DOT-framework and some of its foundations. We then turn to the setup and results of the empirical literature study and we discuss the common patterns we found. We finish the paper with conclusions and a discussion of the work done.
RELATED WORK
Many authors place the field of HCI at the crossroads of several branches of science, engineering and design [1, 18, 20, 26, 34, 39]. Historically, science and engineering may have been the most dominant cultures in HCI [1,18,29,38], but recent years a successful emancipatory movement has made a case for design and design research as a means to
In response to the linguistic confusion and tensions between the several contributing disciplines in HCI [14], several authors have argued for a cross-disciplinary methodology in HCI [26, 28, 33]. Inspiration for such a methodology can be drawn from the social sciences, where mixed-method approaches have shown to be capable of overcoming the ontological (objects of study), epistemological (approaches for knowledge production) and axiological (values in knowledge production) differences which were fiercely debated during the ‘period of the paradigm wars’ [3,12]. Currently, mixed-method designs with solid knowledge-theoretical underpinnings do exist for the social sciences [12, 16]. However, while many of the results in social science literature, such as common reasons for mixing methods [7, 16], may be appropriate for HCI, the foundations of this work have to be reconsidered thoroughly. A core difference between social sciences and HCI, for example, is the status of theory. Being a design-oriented field, HCI strives to combine descriptive and prescriptive theory [17,22] and recognizes (annotated) artefacts as a legitimate form of knowledge [8,15,17,22,25,29,34,39,40]. This has consequences for the knowledge production practices and the way we cluster them. It is hard to imagine how thinking of a mixed-method design as a combination of qualitative and quantitative methods—as it is defined in most social science textbooks—can relieve the common tension between understanding oriented work and creation oriented work [6,14] (to name but one example). To advance mixed-method approaches for HCI, mixed-method theory has to be developed, including an ontology, epistemology and axiology fitting our field. The DOT-framework, to which we turn next, is an effort to do just that.
DEVELOPMENT ORIENTED TRIANGULATION
Overview of the DOT-framework
Figure 1 shows the DOT-framework. It identifies research strategies, which are organized along 3 central trade-offs which HCI researchers face when choosing one method over the other.
First layer: Two domains for HCI Research
In the ontological top layer, the DOT-framework follows Hevner et al. [22] and Mackay and Fayard [26] in identifying two domains of study for HCI-professionals [37]. Both domains are a resource for research as well as an opportunity space for change. The first domain is the application domain: HCI researchers need to learn how humans interact with computers and they aim to change this interaction for the better. The second domain is the domain of available work, which consists of existing artefacts, theories and models to which a researcher has access. HCI researchers study available work and contribute to it. All HCI research activities take place in the innovation space between these two domains. The DOT-framework thus casts HCI research as an “organized learning activity which is instrumental to an innovation or development challenge and, as such, brokers between the domain of available work and the application context” [37].
Figure 1: Overview of the DOT-framework showing five types of research, related to two domains and three fundamental trade-offs.
Second layer: Three trade-offs
The second layer of the DOT-framework is axiological in nature. It identifies three trade-offs between basic values in research design which cannot be optimized simultaneously and thus need to be triangulated [26,37].
Rigor or Relevance?
The distinction between the two domains in the top layer directly translates into a research trade-off. Hevner et al. [22] propose that there are multiple ‘research cycles’. The researcher learns about and changes the application domain in the relevance research cycle. In the rigor research cycle, in contrast, researchers learn about and contribute to available work. Thus defined, rigor and relevance need to be triangulated: they are both long-term goals for HCI research, but they are hard to optimize simultaneously within a single research strategy.
Certainty or Completeness?
Second, the framework identifies the trade-off between certainty and completeness. This is taken from [35] who distinguish between the concerns for ‘precision of measurement’ and ‘system character of context’. According
to [35], researchers choosing precision of measurement would need to use laboratory experiments or judgment tasks while researchers who value the system character of context would use field studies or ethnography instead.
**Inspiration or Data?**
The third trade-off in the DOT-framework is between those approaches that require researcher involvement and subjectivity (called *inspiration-oriented* approaches in the framework) and those that view the researcher as independent observer of reality (called *data-oriented* approaches). Inspiration-oriented approaches have also been labeled ‘intuitive’ [14], ‘phenomenological’ [13, 20] or ‘creative’ design [39]. Data-oriented approaches are also known as ‘analytic’ [14], ‘positivistic’ [13] or ‘engineering’ design [39].
**Third Layer: Five research strategies**
Third, in the epistemological layer, the DOT-framework replaces the distinction between qualitative and quantitative methods as used in the social sciences by five distinct research strategies for HCI. These are aligned with the trade-offs in the second layer of the framework, leading to a close mapping of epistemology and axiology - also found in [35]. This connection between the second and third layer sets the classification of methods in the DOT-framework apart from other classifications of methods (see [25] for an overview), which lack such an underpinning. Nevertheless, the final classification is close to that of [30].
**Library**
Library methods enable researchers to learn about available work related to their research question. Literature studies, the creation of a benchmark and competition analyses are typical library studies. Library studies may be inspirational or data oriented, and they aim to get a better connection with and an overview (completeness) of available work (rigor) which is relevant to the research problem.
**Field**
Field methods, often borrowed from interpretive social science [31], aim to capture the context of design [25], or in the terms of the DOT-framework: to get a complete understanding (completeness) of the application domain (relevance) of the development effort. Some field methods, such as contextual inquiry, show a strong reliance on data gathering and analysis while others such as cultural probes are explicitly optimized to be inspirational for the designers involved. Field methods are found in [30] as ‘observation’, and in [25] as ‘field methods’.
**Workshop**
Within the DOT-framework workshop methods are defined as methods which aim to conceive or improve the solution without a direct reference to the domain of available work or application context. Most software engineering disciplines [22,31] and the ‘research through design’ community [25,40] consider creating artifacts an important part of research and development efforts. The creative design [39] tradition has developed many inspiration-oriented methods such as ideation methods and morphological maps, while engineering design [39] typically relies on more analytical workshop methods such as optimization metrics, iterative improvement of the system performance, or code refactoring. Workshop methods are more narrowly defined in [30] as ‘systems development’.
**Lab**
Lab studies aim to test (certainty) a proposed solution, against aspects of (or goals for) the application domain (relevance). They typically involve some form of empirical manipulation, if only as lightweight as asking users to try a prototype of a new system. Lab studies complement field studies with their concern for the application domain, but field studies are more suitable for getting an overview, while the lab studies aim at optimizing the certainty of the outcomes through controlled experimentation. Most usability evaluations are lab studies. In [25] the authors use the term ‘lab study’ in the same fashion as we do, while [30] calls it ‘experimentation’.
**Showroom**
The DOT-framework identifies showroom methods as methods that help to make specific work (certainty) more reusable by other researchers (rigor). One example is the explicit comparison of the performance of an algorithm, with a benchmark. Another example is a critique of a (finished) design in relation to existing work; as this helps others to assess the potential of the proposed solution for their problem. The creation of design frameworks or guidelines intended to highlight considerations that go beyond individual designs are also labeled ‘showroom methods’. In [30] the combination of library and showroom studies is called ‘theory building’; in [25] the term ‘showroom’ is used, be it in a somewhat narrower sense.
**AN EMPIRICAL LITERATURE STUDY**
**Setup**
The basis for the work presented in this paper is a classification of a random sample of the NordiCHI 2012 proceedings using the DOT-framework. Scholarly papers are often used as a shortcut to understand scholarly practice (e.g. [7,29,36]). Although academic papers may represent a slightly stylized version of the actual work, we had enough confidence in the transparency realized by the authors to assume that all relevant details that we needed for defining patterns could be found in the papers. We chose NordiCHI because it is one of the larger European conferences on HCI, thus providing a fair selection of the HCI research in Europe in proceedings of manageable proportions (100 papers in total). It was also the most recent European HCI conference when we first started this study. The study was carried out by a group of 7 raters. All raters were staff members of our university involved in research, teaching, or both. In an earlier study [37] a smaller sample of 10 papers of the same proceedings was examined as a first test of the applicability of the DOT-framework for this
purpose. Only one rater of the current group was also involved in that study.
The first phase of the study consisted of a reexamination of the ten papers which were studied in [37]. This allowed us to critically reexamine the reported triangulation paths of this study, to get acquainted with the task of classifying papers with the DOT-framework and to sharpen our procedures. In this phase, at least four raters read each paper and extensively discussed their interpretations to reach consensus on the triangulation paths reported in the paper. The resulting consensus differed from the interpretation in [37] for two papers, and only on minor points. Nevertheless, substantial differences in the ‘first reading’ of the papers existed, and we found it necessary to adjust the procedures followed in [37] to increase the ease of replication.
As in [37], we considered the narratives in the papers as a report of a triangulation path: a string of smaller and bigger chunks of research which can be labeled with the research strategies of the DOT-framework and which typically ‘crosses’ one or more of the trade-offs of the framework. Reconstructing this path from a paper is not trivial. Paper narratives transcend several layers of abstraction. For example: a paper may have the goal to deliver a novel theory (showroom) about HCI practitioners in the field (Figure 2, top layer), which is then realized at a lower level of abstraction by combining a field, library and showroom study (Figure 2, middle layer). On a yet lower level of abstraction (Figure 2, bottom layer) this may involve activities which could be described as workshop and lab activities.
Figure 2: Papers can be read at different levels of abstraction, each rendering its own ‘reading’ of the triangulation path. In practice the layered picture drawn here is an idealization, too.
To arrive at a uniform classification of the triangulation path, we need to reconstruct these layers in the exegesis of the paper. This is complicated by the fact that not all research activities are equal in size and (seemingly) in importance to the authors. As [37] report, the separate chunks in a research paper may differ in size and may be reported incompletely (e.g. reporting results only, rather than the complete research cycle). In [37] these problems were addressed by making a distinction between the thick path (main chunks of research) and thin path (concerns that were addressed in a lightweight manner) of the paper. This distinction, however, turned out to be sensitive to disciplinary bias of the raters and the decision whether a chunk should be considered ‘not present’, ‘thick’, or ‘thin’ was a source of debate. Therefore a sharpened procedure and more objective criteria for the ‘weight’ of a chunk were put in place.
To address the layered nature of papers we made a prediction of the triangulation path based on the abstract and introduction of the paper, which then was verified with a close reading of the full paper. This procedure sensitized raters for the hierarchies as they were constructed by the authors of the papers (the principle of ‘authors’ intent’). Also we agreed not to divide a fully reported research cycle into multiple chunks. For example, if authors gathered field data and described how they processed it to arrive at conclusions, this was considered as part of the field study and not as new chunk of research in need of classification. Also, we replaced the ‘thin’ and ‘thick’ distinction with a set of four, less ambiguous, criteria: the SPIM criteria (Table 1).
Table 1: The SPIM criteria for deciding on the 'weight' of a chunk of research.
S: Separate Contribution
P: Paper Space
I: Internal Impact
M: Method Transparency
With separate contribution (S) we refer to the idea that a chunk of research can be seen by the authors as an independent contribution to the field of HCI, for example if they mention it as such in the abstract or introduction. With paper space (P) we refer to authors showing they find a part of their work important by dedicating paper space to it (the criterion was scored when authors dedicated more than 1/5th of the paper to the research chunk). Internal impact (I) refers to the idea that ideally, in a mixed-method paper, each chunk of research has an influence on other chunks. Internal Impact was scored when it was possible to infer how a chunk impacted the rest of the study. The final criterion was method transparency (M) which we used to indicate whether the authors were transparent about the methods they used to answer the questions of their study. Together the SPIM scores gave us a much more objective and nuanced view on how authors dealt with a chunk of research than in the earlier study, i.e. [37].
Another difficulty with the classification of papers is dealing with chunks that show characteristics of multiple research strategies. For example, co-design workshops typically focus on creating new solution directions. Stakeholders from the application domain are often present both to give ‘field’ input, and to ‘validate’ early solutions. In participatory design settings, these workshops are often executed on site. Thus, co-design workshops share characteristics with both field studies and lab studies. Similar ‘confusions’ arose between lab and field (when an intervention was lightweight, for example) and between showroom and workshop (some raters scored the ‘construction’ of a framework as workshop, while others preferred showroom). We documented the disagreements and consensus decisions, and this documentation was consulted by the other raters in case of doubt.
Within the revised setup we then set out to classify a set of 30 more papers from the NordiCHI proceedings. Each paper was rated by two raters from our group, in new pairs for each paper. The two raters classified the paper independently and then reached consensus on the triangulation path. The consensual outcome and possible remaining discussion points were discussed in the whole group. All raters had read and classified the abstract and introduction, giving them enough background to act as a sparring partner for the raters who had read the full paper. Cohen’s kappa was calculated for all independent ratings (before the consensus meeting). Kappa was calculated by treating the 25 decisions a rater had to make (the four SPIM decisions for each research strategy, as well as the decision whether a study should be regarded as inspiration or data oriented) as independent nominal decisions. This gave a somewhat conservative estimate of agreement, but we found it more suitable than, for instance, combining the SPIM criteria in an ordinal agreement measure. This procedure led to an estimated value of kappa (Fleiss’ kappa with Conger’s correction) of $\kappa = 0.70$, which can be considered good [19]. Consensus was also easily reached in most cases, suggesting that remaining differences in first judgment were small.
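The agreement computation described above can be sketched as follows. This is plain Cohen's kappa for two raters over nominal decisions; the rating vectors are made-up stand-ins for illustration, not the study's data, and the multi-rater Fleiss/Conger variant mentioned above would need a dedicated statistics library.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same list of nominal decisions."""
    n = len(r1)
    # Observed agreement: fraction of decisions on which the raters concur.
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal category frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)  # undefined if p_e == 1

# Hypothetical ratings: 25 binary decisions per paper (four SPIM decisions
# plus the inspiration/data decision, for each of five research strategies).
rater1 = [1, 1, 0, 0, 1] * 5
rater2 = [1, 0, 0, 0, 1] * 5
print(round(cohens_kappa(rater1, rater2), 2))
```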
Results
To give an overall idea of the classifications, Table 2 lists the number of times a research strategy was identified in a paper, and whether it was scored as an inspiration or data oriented method. The table shows that all research strategies occur frequently in our dataset, with a fair division between inspiration and data oriented approaches. It appears that the three dimensions of the framework give a balanced coverage of research approaches in HCI.
Table 2: totals for scored research strategies in the dataset.
<table>
<thead>
<tr>
<th></th>
<th>Inspiration</th>
<th>Data</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>Library</td>
<td>29</td>
<td>9</td>
<td>38</td>
</tr>
<tr>
<td>Field</td>
<td>6</td>
<td>10</td>
<td>16</td>
</tr>
<tr>
<td>Workshop</td>
<td>16</td>
<td>8</td>
<td>24</td>
</tr>
<tr>
<td>Lab</td>
<td>11</td>
<td>15</td>
<td>26</td>
</tr>
<tr>
<td>Showroom</td>
<td>14</td>
<td>10</td>
<td>24</td>
</tr>
</tbody>
</table>
Not all research strategies scored equally on all SPIM criteria. Table 3 shows separate counts for those (note that multiple SPIM values are common for each scoring of a research strategy).
Table 3: SPIM criteria for research strategies
<table>
<thead>
<tr>
<th></th>
<th>Separate contribution</th>
<th>Paper Space</th>
<th>Internal impact</th>
<th>Method transparency</th>
</tr>
</thead>
<tbody>
<tr>
<td>Library</td>
<td>14</td>
<td>9</td>
<td>37</td>
<td>6</td>
</tr>
<tr>
<td>Field</td>
<td>11</td>
<td>8</td>
<td>15</td>
<td>14</td>
</tr>
<tr>
<td>Workshop</td>
<td>22</td>
<td>16</td>
<td>22</td>
<td>14</td>
</tr>
<tr>
<td>Lab</td>
<td>20</td>
<td>16</td>
<td>16</td>
<td>18</td>
</tr>
<tr>
<td>Showroom</td>
<td>23</td>
<td>14</td>
<td>14</td>
<td>9</td>
</tr>
</tbody>
</table>
The table shows that, while field, workshop and lab studies have a good distribution across the SPIM criteria, library studies and showrooms do not. Showroom and library score low on method transparency, indicating that authors typically mention the results of these studies (such as reviews of existing literature, novel guidelines or frameworks), but not how they arrived at them. The low frequency of the I criterion for showroom is explained by the fact that showroom is typically found in the last section of the paper, for which we did not score this criterion.
TOWARDS RESEARCH DESIGN PATTERNS
Approach
Having decided on the full triangulation paths of all papers, we clustered the papers which had a similar triangulation path. We also looked at the contents of the papers, using the contribution types in HCI as discussed by Newman [29], Cockton [10] and the CHI 2013 organization [9]. This led to a clustering of papers within pattern proposals. We eventually placed papers which were arguably similar in six pattern proposals. It turned out that three quarters of the papers in the set were covered by only three dominant patterns. Only two papers could not be classified in any of the six pattern proposals. Table 4 lists the division of papers across (candidate) patterns.
Table 4: Patterns which were found
<table>
<thead>
<tr>
<th>Pattern name</th>
<th>No. of papers</th>
</tr>
</thead>
<tbody>
<tr>
<td>Rigor Cycle</td>
<td>9</td>
</tr>
<tr>
<td>Validated Solution</td>
<td>11</td>
</tr>
<tr>
<td>Field Reframing</td>
<td>10</td>
</tr>
<tr>
<td>Parameter Discovery</td>
<td>4</td>
</tr>
<tr>
<td>Transformative Design</td>
<td>3</td>
</tr>
<tr>
<td>Relevance Cycle</td>
<td>1</td>
</tr>
<tr>
<td>Unclassified</td>
<td>2</td>
</tr>
</tbody>
</table>
To make the step from clusters of papers to patterns (which are partially prescriptive) the sets of papers belonging to a pattern proposal were revisited and discussed thoroughly in the group of 7 raters. For our pattern descriptions it was important to decide why strategies were combined in the papers that we studied. Therefore we maintained a list of possible combination goals. We revised the list several times, striving to balance comprehensiveness, precision and parsimony, and comparing it with the lists offered for the social sciences by [7,16]. Table 5 shows our final set of combination goals. Although we could fit the reasons for combining methods to our set, we do not claim the list is complete for all HCI studies in general.
Table 5: Reasons for combining methods in HCI
<table>
<thead>
<tr>
<th>Shorthand</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Niche</strong></td>
<td>One study delineates or identifies a space which can be filled in a later study. A niche can be <em>explored</em>, <em>filled</em>, or <em>illustrated</em> (with a concrete example of a general idea).</td>
</tr>
<tr>
<td><strong>Proposition</strong></td>
<td>A study delivers a result, insight or prototype which can be tested or expanded in a follow-up study. A proposition can be <em>tested</em>, <em>validated</em>, <em>positioned</em> or <em>expanded</em> (placed in a broader context)</td>
</tr>
<tr>
<td><strong>Framing</strong></td>
<td>A study delivers context, a corresponding background understanding, a more or less coherent way of thinking about a problem. A frame can be <em>illustrated</em>, <em>transformed</em>, or <em>expanded</em>.</td>
</tr>
<tr>
<td><strong>Content</strong></td>
<td>A study is done to collect concrete materials such as a dataset or test-setup which can be used in a follow up study. Content can be <em>analyzed</em>, or <em>used</em>.</td>
</tr>
<tr>
<td><strong>Guidance</strong></td>
<td>One study delivers insights which help to set up a follow up study. Guidance can be <em>followed</em>.</td>
</tr>
</tbody>
</table>
Finally, we compared the papers in each pattern to the standards that can be derived from the DOT-framework. In particular we looked at whether triangulation across one or more of the trade-offs in the framework occurred within patterns and we considered to what extent the patterns could be combined with other, complementary, patterns. Moreover, we consulted external standards for best practice research as given by the CHI organization [9] and others [17,22,25,29]. Matching the existing principles and standards with our experiences in reading the papers resulted in pragmatic research standards which would fit each pattern. In the next section we discuss the dominant patterns which were the result of this effort.
THREE DOMINANT PATTERNS
Validated Solution
**Use when**
We define *validated solutions* as studies which propose new artefacts, infrastructures or interaction techniques. The starting point for this approach can be an unsolved problem, deficiencies in existing solutions, or the need to illustrate a novel vision or idea with concrete examples.
**Why**
The validated solution is an effective way to further the state of the art and to ‘push’ new ideas and interaction techniques. It can efficiently bridge rigor and relevance, although some lab studies fail to touch the ‘real’ application domain if this is not clearly defined upfront.
Figure 3: Triangulation path for a validated solution paper in HCI.
**How**
Figure 3 shows the typical triangulation path for a validated solution. A library study can be used to provide context about related solutions and to set a scope (identify a niche) for the rest of the study. In a workshop study, a solution is developed as an *illustration* of the ideas which have been the starting point for the work. The solution then acts as a *proposition* which is *tested* with users in a lab study. In [f], for example, the feasibility of location based voice messaging (identified niche) is demonstrated by building a prototype (proposition) and testing it on technical reliability and acceptability for users (validation).
For this triangulation path it is important that the authors provide argumentation about the perceived advantages of the intended solutions – usually in response to deficiencies of existing solutions [9,29]. The solution needs to be described in sufficient detail, so others can replicate it [9,17]. The validation needs to be rigorous [9], which means it is ideally data oriented. For effective triangulation of completeness and certainty it is important that the solution is evaluated against the perceived advantages which were outlined at the beginning and arise from existing work. Proper triangulation of rigor and relevance suggests the lab study should mimic the intended context of use as closely as possible. If the work is more explorative in nature, these standards may be applied less rigorously, but we advise to add a showroom study to prepare a follow-up by other researchers.
**Special Cases and Combinations**
A showroom study is sometimes added to the validated solution and we recommend this in particular for more explorative papers. The lab study can be split into multiple lab studies, of which some focus more on the evaluation of robustness of the system and others on the subjective experience of users. The pattern can be combined with the
rigor cycle and act as a good follow-up on the field reframing pattern.
Paradigm Papers
The validated solution pattern was found in 11 of the 40 papers. The aforementioned study [f] uses multiple lab studies. In Dalsgaard et al. [b], a validated solution approach for 3D tangible tabletops is combined with a showroom study aimed at explicating design considerations.
Rigor Cycle
Use when
The rigor cycle can be used to explore solutions and to weed out problems in existing work. Within our dataset we found the rigor cycle approach for two types of papers. First we found it for improved methods in which deficiencies in existing methods are identified, solved and evaluated against the literature. Second, we found it in papers that are in the early stages of exploring novel ideas.
Figure 4: Triangulation path for a rigor cycle paper in HCI.
Why
The rigor cycle approach relates new work explicitly to existing work, enabling a long term development of the state of the art. Both types of rigor cycle papers as found in our set appeared to provide a sensible alternative for the validated solution approach. User validation of improved methods is cumbersome and when clear requirements or arguments for a new method can be formulated based on existing work, a user validation may not always be necessary. In explorative design studies, understanding the proposed solutions from the point of view of available work may be more urgent than validating these (immature) solutions with users.
How
Figure 4 shows the triangulation path for the rigor cycle, including the reasons for mixing methods. Typically a library study identifies a niche which is illustrated with the solution as invented in the workshop. Workshop studies in this pattern can involve users in some form, e.g. through an embedded field study, or by involving stakeholders in co-design activities. The result of the workshop study is a proposition which can be positioned against available work, for example by showing how existing deficiencies are solved in the solution.
Triangulation of completeness and certainty can be achieved by identifying requirements for the solution (or method) and evaluating the result explicitly against those requirements. The design of the solution ought to be described in a replicable (thus data oriented) way [9,17]. Triangulation of rigor and relevance is at risk in this pattern, so user involvement in the workshop study is necessary. This can be done by involving users in a co-creation session or by using a real world dataset as content in the workshop. Exploratory research efforts can take more liberty in the way several activities within the pattern are executed, and for those an inspiration oriented approach may even be preferable. If so, the methods need to be clearly described and a well-executed showroom study is vital for an effective contribution to the field.
Special Cases and Combinations
A field or lab study could be added to make sure the work has a better fit with the application domain. Papers oriented at improving methods can use a field study as content for the workshop. The validated solution pattern forms a natural complement to the rigor cycle pattern.
Paradigm Papers
The rigor cycle was found in 9 of the 40 papers. A good example of a rigor cycle approach to improving methods is found in [g], as it shows how correspondence analysis can improve certain aspects of the persona segmentation process. A particularly interesting paper of the exploratory kind is [e] who applied the pattern several times to arrive at guidelines for technology enhanced dance performances.
Field Reframing
Use when
Field reframing can be used when a particular context of use is of interest but not yet studied from a particular point of view. In our dataset we found it was used (1) to understand HCI professionals in the field, (2) to understand users working with emergent technologies (such as novel uses of smartphones) and (3) as a starting point for the design of novel interfaces. We found understanding users and theory to be the most common categories [9] for field reframing papers.
Why
The field reframing pattern can deliver generic findings which are useful for many design problems. The pattern can also bring in the ‘real’ world, or ‘user pull’ which is lacking in validated solutions and rigor cycles.
How
Field reframing papers typically use a library study to understand kernel theories about a problem area, and to identify a niche for the field study, or to frame it. This
The inspiration lab approach is convincingly applied in [c] who studied remote assistance configurations. Their work also features a solid showroom study. The combination of a field reframing study with a validated solution approach is [d] in their work on hybrid augmented reality experiences.
THREE CANDIDATE PATTERNS
In this section we discuss the patterns for which we did not have enough papers to base a description on, but for which we were confident enough to assume a pattern could be formed – given enough data.
Parameter Discovery
A small batch of lab-centric papers investigated specific hypotheses or uncovered parameters which were useful groundwork for systems development or theory testing. These papers used library and lab as most important research strategies, sometimes combined with lightweight field or workshop studies. The papers in this pattern did not deliver novel theory (such as in the field-reframing pattern) and no new applications. All could be described as closed design canvas [10] papers.
Transformative Design
Creswell & Clark [12] refer to transformative designs as studies that have an emancipatory agenda on top of the scientific agenda. Studies like this are found in HCI, in particular in ‘the Scandinavian school of participatory design’ [e.g. 6]. We found only three papers in which such a social agenda was the most important contribution of the paper. These papers were hard to classify. They presented interventions, such as introducing a design activity or system in a context, which served multiple goals. From an HCI point of view they aimed at trying or testing the proposed method or solution. From a social point of view (which was clearly important to the authors) the same methods could be seen differently.
Relevance Cycle
Considering the status of user-centered design in our community, it was somewhat surprising to find only one paper in our set featuring a full relevance cycle: field-workshop-lab. This may reveal an academic bias which is not present in industry: from the perspective of the growth of knowledge, solutions following a full relevance cycle are considered of less interest to academia.
CONCLUSIONS & DISCUSSION
Within this paper we have shown that much of the research as it is done in HCI can be described as mixed-method research within the DOT-framework. We have found indications for how different research aims, and corresponding challenges, are supported by specific mixed-method research designs. The patterns consolidate the pragmatic research practices of HCI researchers. Moreover, apart from the typology of research strategies as it is proposed by the DOT-framework, we have developed terminology to describe how the individual methods of a mixed-method setup can fit together – thus supporting discourse about the coherence of a research approach.
Unsurprisingly, the dominant patterns as we found them appear to correspond to the traditions which are typically reported to form the makeup of HCI: science (field reframing, parameter discovery), engineering (validated solution) and design (rigor cycle). The DOT-framework, however, places the approaches of these different traditions under the umbrella of a single theory, which may improve their combinability. Indeed, apart from describing how the different research strategies within a pattern can be coherently combined, we were able to suggest how patterns as a whole could be optimized for a follow-up with a different pattern. This brings clarity about the intersections of different approaches, on a concrete methodological level, and it allows for well-founded debate on the pragmatics of cross-disciplinary research approaches.
Overall, the DOT-framework fitted well the actual work as reported in our sample of the NordiCHI 2012 proceedings. All five research strategies were found frequently, and most papers combined approaches directed at rigor and relevance, overview and certainty, and inspiration and data; triangulation appears to be the norm in HCI research, not the exception. From our data it seems that library and showroom studies are typically not regarded as ‘studies’ by HCI researchers. The sections containing related work, or new theory and guidelines (additions to the domain of existing work), were often ‘results only’ sections. As these sections form an important part of the make-up of a paper, we feel our community has much to gain from a more methodological execution and reporting of library and, in particular, showroom studies.
We were initially surprised to find only one paper with a full relevance cycle (field, workshop, lab) in our dataset. Considering the continued dedication of HCI textbooks to the user-centered design cycle [11], which corresponds closely to the relevance cycle, one might expect successful examples of such an approach to be present in our conferences. There can be multiple reasons for the omission. One hypothesis is that, in a future oriented discipline, case studies turn out to be an inefficient way to progress the state of the art, while they are still vital to the practice of HCI outside the scientific research community. This would have to be tested, for example by applying the DOT-framework to HCI work such as it is executed in the industry practice of web, game and application design. Such a study about research pragmatics may uncover other biases in our current study, such as the possible linearization of highly iterative research practices [27] through the processes of scientific storytelling.
Much in the way contemporary work on mixed-method research in the social sciences strikes a middle ground between the axiological debates of the paradigm wars and a vague ‘anything goes’ form of pragmatism (as Bergman [3] puts it), our work provides a third route between those extremes. Although the patterns as we describe them here need to evolve further and will have to withstand the test of thoughtful application by researchers, in this paper we have shown the outlines of a theory based, practice informed approach to research pragmatics in our multidisciplinary field.
**REFERENCED PAPERS FROM THE DATASET**
These papers appeared in *Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design (NordiCHI ’12)*. ACM, New York, NY, USA.
- Brown, J. M., Lindgaard, G. and Biddle, R. Joint implicit alignment work of interaction designers and software developers. 693-702.
- Laporte, L., Slegers, K. and De Grooff, D. Using Correspondence Analysis to monitor the persona segmentation process. 265-274.
**OTHER REFERENCES**
14. Fallman, D. The interaction design research triangle of design practice, design studies, and design exploration. Design Issues, 24, 3 (2008), 4-18.
21. Gamma, E., Helm, R., Johnson, R. and Vlissides, J. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 2002.
The String-to-String Correction Problem with Block Moves
Walter F. Tichy
Report Number:
83-459
The String-to-String Correction Problem with Block Moves
Walter F. Tichy
Purdue University
Department of Computer Science
West Lafayette, IN 47907
CSD-TR 459
ABSTRACT
The string-to-string correction problem is to find a minimal sequence of edit operations for changing a given string into another given string. Extant algorithms compute a Longest Common Subsequence (LCS) of the two strings and then regard the characters not included in the LCS as the differences. However, an LCS does not necessarily include all possible matches, and therefore does not produce the shortest edit sequence.
We present an algorithm which produces the shortest edit sequence transforming one string into another. The algorithm is optimal in the sense that it generates a minimal, covering set of common substrings of one string with respect to the other.
Two runtime improvements of the basic algorithm are also presented. Runtime and space requirements of the improved algorithms are comparable to LCS algorithms.
Categories and Subject Descriptors: D.2.2 [Software Engineering]: Tools and Techniques—programmer workbench, software libraries; D.2.6 [Software Engineering]: Programming Environments; D.2.7 [Software Engineering]: Distribution and Maintenance—version control
General Terms: Algorithms
Additional Key Words and Phrases: String-to-string correction, block moves, deltas, differences, source control, revision control
October 26, 1983
The String-to-String Correction Problem with Block Moves
Walter F. Tichy
Purdue University
Department of Computer Science
West Lafayette, IN 47907
CSD-TR 459
Introduction
The string-to-string correction problem is to find a minimal sequence of edit operations for changing a given string into another given string. The length of the edit sequence is a measure of the differences between the two strings. Programs for determining differences in this manner are useful in the following situations.
(1) Difference programs help determine how versions of text files differ. For instance, computing the differences between revisions of a software module helps a programmer trace the evolution of the module during maintenance[6], or helps create test cases for exercising changed portions of the module. Another application is the automatic generation of change bars for new editions of manuals and other documents.
(2) Frequently revised documents like programs and graphics are stored most economically as a set of differences relative to a base version[10,12]. Since the changes are usually small and typically occupy less than 10% of the space needed for a complete copy[10], difference techniques can store the equivalent of about 11 revisions in less space than would be required for saving 2 revisions (one original and one backup copy) in cleartext.
(3) Changes to programs and other data are most economically distributed as "update decks" or "deltas", which are edit sequences that transform the old version of a data object into the new one. This approach is used in software distribution. A related application can be found in screen editors and graphics packages. These programs update display screens efficiently by computing the difference between the old and new screen contents, and then transmitting only the changes to the display[2].
This work was supported in part by the National Science Foundation under grant MCS-8108513.
(4) In genetics, difference algorithms compare long molecules consisting of nucleotides or amino acids. The differences provide a measure of the relationship between types of organisms[11].
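The arithmetic behind the 11-revision figure in point (2) above can be checked directly; the 10% delta size is the assumption stated there, and the constants below are ours.

```python
FULL = 1.0     # space for one cleartext copy of the document
DELTA = 0.10   # assumed relative size of one revision stored as a delta
BUDGET = 2 * FULL  # space used by one original plus one backup in cleartext

# One cleartext base version plus some number of deltas must fit the budget.
deltas = 0
while FULL + (deltas + 1) * DELTA <= BUDGET:
    deltas += 1
revisions = 1 + deltas  # the base version counts as a revision too
print(revisions)
```

With a 10% delta size, one base copy plus ten deltas occupies the same space as two cleartext copies, i.e. eleven revisions fit the budget.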
Most of the existing programs for computing differences are based on algorithms that determine a Longest Common Subsequence (LCS). An LCS has a simple and elegant definition, and algorithms for computing an LCS have received some attention in the literature[13, 4, 6, 7, 5, 9]. An LCS of two strings is one of the longest subsequences that can be obtained by deleting zero or more symbols from each of the two given strings. For example, the longest common subsequence of \textit{shanghai} and \textit{sakhalin} is \textit{sahai}. Once an LCS has been obtained, all symbols that are not included in it are considered differences. A simultaneous scan of the two strings and the LCS isolates those symbols quickly. For example, the following edit script, based on the LCS \textit{sahai}, would construct the target string \textit{sakhalin} from \textit{shanghai}.
\begin{verbatim}
M 0,1
M 2,1
A "k"
M 5,2
A "l"
M 7,1
A "n"
\end{verbatim}
An edit-command of the form $M\ p,l$, called a move, appends the substring $S[p, \ldots, p+l-1]$ of source string $S$ to the target string, and an add command of the form $A\ w$ appends the string $w$ to the target string. In the above example, the edit script takes up much more space than the target string, and none of the savings mentioned earlier are realized. In practical cases, however, the common subsequence is not as fragmented, and a single move command covers a long substring. In addition, if this technique is applied to text, one usually chooses full text lines rather than single characters as the atomic symbols. Consequently, the storage space required for a move is negligible compared to that of an add command, and it is worth minimizing the occurrence of add commands. Note that in the above example, the last add command could be replaced with a move, since the symbol $n$ appears in both strings.
Unfortunately, the definition of an LCS is such that the \( n \) cannot be included in the LCS. The algorithm presented below does not omit such matches.
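The move/add semantics just described can be sketched as a tiny interpreter. The tuple encoding of the script is our own, not the paper's; the script itself is the shanghai-to-sakhalin example from the text.

```python
def apply_edit_script(source, script):
    """Build the target string from a source string and an edit script.

    A command ('M', p, l) appends source[p : p+l] (a move);
    a command ('A', w) appends the literal string w (an add).
    """
    target = []
    for cmd in script:
        if cmd[0] == 'M':
            _, p, l = cmd
            target.append(source[p:p + l])
        else:
            target.append(cmd[1])
    return ''.join(target)

# The example script from the text: shanghai -> sakhalin via the LCS sahai.
script = [('M', 0, 1), ('M', 2, 1), ('A', 'k'),
          ('M', 5, 2), ('A', 'l'), ('M', 7, 1), ('A', 'n')]
print(apply_edit_script("shanghai", script))  # sakhalin
```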
Problem Statement
Given 2 strings \( S=S[0, \ldots, n], n \geq 0 \) and \( T=T[0, \ldots, m], m \geq 0 \), a **block move** is a triple \( (p, q, l) \) such that \( S[p, \ldots, p+l-1] = T[q, \ldots, q+l-1] \) \((0 \leq p \leq n-l+1, 0 \leq q \leq m-l+1, l > 0)\). Thus, a block move represents a non-empty, common substring of \( S \) and \( T \) with length \( l \), starting at position \( p \) in \( S \) and position \( q \) in \( T \). A **covering set of \( T \) with respect to \( S \)**, denoted by \( \delta_S(T) \), is a set of block moves, such that every symbol \( T[i] \) that also appears in \( S \) is included in exactly one block move. For example, a covering set of \( T=\text{abcab} \) with respect to \( S=\text{abda} \) is \( \{(0,0,2),(0,3,2)\} \). A trivial covering set consists of block moves of length 1, one for each symbol \( T[i] \) that appears in \( S \).
The problem is to find a **minimal** covering set, \( \Delta_S(T) \), such that \( |\Delta_S(T)| \leq |\delta_S(T)| \) for all covering sets \( \delta_S(T) \). The coverage property of \( \Delta_S(T) \) assures that all possible matches are included, and the minimality constraint makes the set of block moves (and therefore the edit script) as small as possible.
Because of the coverage property, it is apparent that \( \Delta_S(T) \) includes the LCS of \( S \) and \( T \). (Consider the concatenation of the substrings \( T[q_j, \ldots, q_j+l_j-1] \), where \( (p_j, q_j, l_j) \) is a block move of \( \Delta_S(T) \), and the substrings are concatenated in order of increasing \( q_j \).) The minimality constraint assures that the LCS cannot provide a better "parcelling" of the block moves.
**False Starts**
Before presenting the solution, it is useful to consider several more or less obvious approaches, all of which fail. The first approach is to use the LCS. As we have seen, an LCS has the property of not necessarily generating a covering set of block moves. For example, the following two pairs of strings have the LCS \( \text{abc} \), which does not include the (moved) common substring \( \text{de} \) nor the (repeated)
common substring $abc$. The LCS match is shown on the left, $\Delta_S(T)$ on the right.
- $S = abcde$
- $T = deabc$
- $S = abc$
- $T = abcabc$
Heckel\cite{3} pointed out similar problems with LCS techniques and proposed a linear-time algorithm to detect block moves. The algorithm performs adequately if there are few duplicate symbols in the strings. However, the algorithm gives poor results otherwise. For example, given the two strings $aabb$ and $bbaa$, Heckel's algorithm fails to discover any common substring.
An improvement of the LCS approach is to apply the LCS extraction iteratively. For instance, after finding the initial LCS in the above examples, one could remove it from the target string $T$ and recompute the LCS. This process is repeated until only an LCS of length 0 remains. The iterative LCS strategy succeeds in finding a covering set, but not necessarily the minimal one. The following example illustrates.
- $S = abcdea$
- $T = cdab$
Assuming again that $S$ is the source string and $T$ is the target string, the left diagram shows the match obtained via an iterative LCS algorithm. The first LCS is $\text{cda}$, the second one is $b$. Since $\text{cda}$ is not a substring of $S$, we obtain a total of 3 block moves. The minimal covering set, shown to the right, consists of 2 block moves.
Another tack is to search for the longest common substring rather than the longest common subsequence*. Computing the longest common substring iteratively results in a covering set, but again not necessarily a minimal one. Consider the following example.
* Recall that a subsequence may have gaps, a substring may not.
\[ S = \quad a \ b \ c \ d \ e \ f \ d \ e \ a \ b \]
\[ T = \quad c \ d \ e \ a \ b \ c \]
The left diagram shows the block moves obtained by searching repeatedly for the longest common substring of \( S \) and \( T \). The result is a set of 3 block moves, although the minimal covering set has only 2. Searching for the longest common substring is too "greedy" a method, since it may mask better matches.
**Basic Algorithm**
A surprisingly simple algorithm does the job. Start at the left end of the target string \( T \), and try to find prefixes of \( T \) in \( S \). If no prefix of \( T \) occurs in \( S \), remove the first symbol from \( T \) and start over. If there are prefixes, choose the longest one and record it as a block move. Then remove the matched prefix from \( T \) and try to match a longest prefix of the remaining tail of \( T \), again starting at the beginning of \( S \). This process continues until \( T \) is exhausted. The recorded block moves constitute a \( \Delta_S(T) \), a minimal covering set of block moves of \( T \) with respect to \( S \), as will be shown later. The following example illustrates several steps in the execution of the algorithm. The string to the right of the vertical bar is the unprocessed tail of \( T \).
**Step 1:**
\[ S = \quad u \ v \ w \ u \ v \ w \ x \ y \]
\[ T = \quad j \ u \ v \ w \ x \ w \ u \]
longest block move starting with \( T[0] \) : none
**Step 2:**
\[ S = \quad u \ v \ w \ u \ v \ w \ x \ y \]
\[ T = \quad j \ u \ v \ w \ x \ w \ u \]
longest block move starting with \( T[1] \) : (3,1,4)
**Step 3:**
\[ S = \quad u \ v \ w \ u \ v \ w \ x \ y \]
\[ T = \quad j \ u \ v \ w \ x \ w \ u \]
longest block move starting with \( T[5] \) : (2,5,2)
In step 1, we search for a prefix of $T[0, \ldots, 6]$ in $S[0, \ldots, 7]$. Since there is none, we search for a prefix of $T[1, \ldots, 6]$ in the next step. This time we find 2 matches, and choose the longer one, starting with $S[3]$. In step 3, we search for a prefix of $T[5, \ldots, 6]$ in $S[0, \ldots, 7]$, and find the longest one at $S[2]$, length 2. Now $T$ is exhausted and the algorithm stops. Note that in each step we start at the left end of $S$ in order to consider all possible matches.
The algorithm is presented below. Let us assume that the source string is stored in an array $S[0, \ldots, m]$, and the target string in $T[0, \ldots, n]$. $T[q]$ is the first symbol of the unmatched tail of $T$; $q$ is initially zero. The first refinement of the algorithm is now as follows.
```plaintext
q := 0;
while q <= n do
begin
L:  find p and l such that (p,q,l) is a maximal block move;
    if l > 0 then print(p,q,l);
    q := q + Max(l,1)
end
```
Implementing the statement labelled $L$ is simple. Search $S$ from left to right for a longest possible prefix of $T[q, \ldots, n]$. Note that the search can terminate as soon as there are fewer than $l+1$ symbols left in $S$, assuming that $l$ is the length of the maximal block move found in the current iteration. Similarly, there is no possibility of finding a longer block move if the last one included $T[n]$. (We use `and then` as the conditional logical AND operator.)
```plaintext
L:
l := 0; p := 0; pCur := 0;
while (pCur+l <= m) and (q+l <= n) do
begin
  { determine length of match between S[pCur,...] and T[q,...] }
  lCur := 0;
  while (pCur+lCur <= m) and (q+lCur <= n)
        and then (S[pCur+lCur] = T[q+lCur])
  do lCur := lCur+1;
  if lCur > l then
  begin
    { new maximum found }
    l := lCur; p := pCur
  end;
  pCur := pCur+1
end
```
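As a cross-check, the refinement above can be sketched in Python (a hedged translation using 0-based string indexing; it omits the early-termination tests of the pseudocode, so it simply scans all of $S$ for each block move):

```python
def block_moves(S, T):
    """Greedy minimal covering set of block moves (basic algorithm).
    Returns triples (p, q, l) with T[q:q+l] == S[p:p+l]."""
    moves = []
    q, n = 0, len(T)
    while q < n:
        best_p, best_l = 0, 0
        for p in range(len(S)):                  # scan S left to right
            l = 0
            while p + l < len(S) and q + l < n and S[p + l] == T[q + l]:
                l += 1
            if l > best_l:                       # strictly longer: keep leftmost
                best_p, best_l = p, l
        if best_l > 0:
            moves.append((best_p, q, best_l))
        q += max(best_l, 1)                      # q := q + Max(l,1)
    return moves
```

On the step-by-step example, `block_moves("uvwuvwxy", "juvwxwu")` yields `[(3, 1, 4), (2, 5, 2)]`, matching steps 1 through 3, and on the earlier covering-set example it yields `{(0,0,2),(0,3,2)}` for $T=\text{abcab}$, $S=\text{abda}$.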
The runtime of this algorithm is bounded by $mn$, and the space requirements are $m+n$. We now show that this algorithm finds $\Delta_S(T)$. Clearly, the set of block moves printed is a covering set, because each symbol in $T$ that is not
included in some block move is (unsuccessfully) matched against each symbol in $S$. To see that the covering set is minimal, consider $T$ below, with the matching produced by our algorithm denoted as follows. Substrings included in a block move are bracketed by "(" and ")". Substrings of symbols excluded from any block move are denoted by $X$.
$$\cdots X (\cdots) X (\cdots)(\cdots) X (\cdots)(\cdots) X \cdots$$
Suppose there is a $\delta'_S(T)$ with fewer block moves than the set generated by our algorithm. Clearly, the substrings denoted by $X$ cannot be part of $\delta'_S(T)$, because our algorithm does produce a covering set. We can therefore exclude all unmatched substrings from consideration, and concentrate on individual sequences of contiguous block moves.
Now consider block moves that are contiguous in $T$. The only way to obtain a smaller covering set is to find a sequence of $k > 1$ contiguous block moves and to "repackage" them into a covering set of fewer moves. We will show by induction on the number of contiguous block moves that the set produced by our algorithm is minimal.
Suppose we have $k \geq 1$ contiguous block moves generated by our algorithm. This means that we have $k$ triples $(p_i, q_i, l_i)$, $(1 \leq i \leq k)$ satisfying the following conditions.
$$\forall i: 1 \leq i \leq k: \quad T[q_i, \ldots, q_i + l_i - 1] = S[p_i, \ldots, p_i + l_i - 1] \quad (\ast)$$
$$\forall i: 1 \leq i \leq k,\ \forall p: 0 \leq p \leq m - l_i: \quad T[q_i, \ldots, q_i + l_i] \neq S[p, \ldots, p + l_i] \quad (\ast\ast)$$
$$\forall i: 1 \leq i < k: \quad q_i + l_i = q_{i+1} \quad (\ast\ast\ast)$$
The first condition is just the definition of a block move. The second condition assures that each block move starting at $T[q_i]$ is maximal. The third condition means that the block moves are contiguous in $T$.
We need to show that for any set of $k$ block moves satisfying (\ast) to (\ast\ast\ast), any equivalent set has at least $k$ block moves. Actually, it is convenient to prove something slightly more general: For any set of $k$ block moves satisfying (\ast) to (\ast\ast\ast), any set which covers the first $k-1$ block moves and a non-empty prefix of block move $k$ has at least $k$ block moves. First, assume $k = 1$. Clearly, we cannot cover a non-empty prefix of a single block move with fewer than 1 block move. Now assume that $k > 1$, and that all sets covering the first $k-2$ block
moves and any non-empty prefix of block move $k-1$ consist of at least $k-1$ block moves. Consider what we can do with non-empty prefixes of the $k$th block move. There are two cases. The first case applies to sets that cover the original block move $k-1$ with a single move $B$. In this case, let $B = (p_b, q_b, l_b)$, where $q_b \leq q_{k-1}$, and $q_b + l_b = q_{k-1} + l_{k-1}$. By the induction hypothesis, $B$ is at least the $(k-1)$st move in the equivalent set. It is impossible to append a non-empty prefix of move $k$ to $B$, since that would contradict (\ast\ast). Thus we need at least $k$ moves for covering the original $k-1$ moves and a non-empty prefix of original move $k$. The second case applies to sets that split the original block move $k-1$ into at least 2 non-empty moves (see the diagram below).
The only choice to reduce the number of block moves below $k$ is to coalesce the suffix of the original move $k-1$ with a non-empty prefix of move $k$. This new parcelling leaves us with (a) a set covering the original $k-2$ block moves and a non-empty prefix of block move $k-1$, (b) a new coalesced move covering a suffix of move $k-1$ and a prefix of $k$, and (c) another block move if the suffix of move $k$ is not empty. By the induction hypothesis, we know that (a) has at least $k-1$ moves. Add to that the (non-empty) coalesced move, and we end up with at least $k$ moves for covering the first $k-1$ block moves and any non-empty prefix of move $k$. Thus, any set equivalent to the block moves generated by our algorithm has at least $k$ elements. QED.
**First Improvement of the Basic Algorithm**
Consider a situation where the source string $S$ has few replicated symbols. That is, $\alpha$, the size of the alphabet of $S$, is approximately equal to $m$. In this case, a significant improvement of the basic algorithm is possible. During a single scan of $S$, we prepare an index that, for each symbol $s$ in the alphabet, lists the positions of all occurrences of $s$ in $S$. In the basic algorithm, we replace the statement labelled $L$ with the following. Assume $T[q] = s$ is the first symbol of the unmatched tail of $T$. Look up, in this index, the list $\ell$ of occurrences of symbol $s$ in $S$.
[Diagram: the block moves $k-2$, $k-1$, and $k$ produced by our algorithm, together with an alternative set $\delta'_S(T)$ covering through move $k-1$ and a set $\delta''_S(T)$ covering through move $k$.]
If the list is empty, no match is possible. Otherwise, find the maximal block move among those starting at the positions listed in $\ell$.
The performance of this algorithm is as follows. Assume the average length of a block move is ℓ. Then the maximal block move must be selected among \( m/α \) alternatives, at a cost of not more than \( ℓ + 1 \) comparisons each. Thus, the runtime of the algorithm is \( O(ℓ*(m/α)*(n/ℓ)) = O(mn/α) \). If \( m ≈ n \), we obtain a nearly linear algorithm.
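A hedged Python sketch of this indexed variant (illustrative names, not from the paper; it produces the same covering set as the basic algorithm because candidate positions are tried in left-to-right order):

```python
from collections import defaultdict

def block_moves_indexed(S, T):
    """First improvement: index the positions of each symbol of S so
    that only candidate start positions for T[q] are examined."""
    index = defaultdict(list)
    for p, s in enumerate(S):
        index[s].append(p)
    moves = []
    q = 0
    while q < len(T):
        best_p, best_l = 0, 0
        for p in index.get(T[q], []):            # only positions with S[p] == T[q]
            l = 0
            while p + l < len(S) and q + l < len(T) and S[p + l] == T[q + l]:
                l += 1
            if l > best_l:
                best_p, best_l = p, l
        if best_l > 0:
            moves.append((best_p, q, best_l))
        q += max(best_l, 1)
    return moves
```

When most symbols of $S$ are distinct, each lookup returns only $m/\alpha$ candidates on average, which is where the $O(mn/\alpha)$ bound comes from.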
Program text and prose have the property of few repeated lines. In program text, the only repeated lines should be empty or consist of bracketing symbols like `begin` and `end`; for all other repetitions one would normally write a subprogram. In prose text, the only repeated lines should be empty or contain formatting commands. In applying our algorithm to prose or program text, it is therefore appropriate to choose lines as the atomic symbols. To speed up comparisons, the program should use hashcodes for lines of text rather than performing character-by-character comparisons.
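One simple way to realize the "lines as atomic symbols" idea in Python (a sketch; interning lines to small integers serves the same purpose as the hashcodes mentioned above, without collisions to handle):

```python
def line_symbols(text):
    """Map each distinct line of text to a small integer symbol, so the
    differencing algorithm compares integers instead of whole lines."""
    table = {}
    return [table.setdefault(line, len(table)) for line in text.splitlines()]
```

Two revisions of a file can be converted to symbol sequences with a shared table and then fed to the block-move algorithm unchanged.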
We implemented a program incorporating these ideas, called `bdiff`, and compared it with `diff[6]`, which uses an LCS algorithm. We executed both programs on 1400 pairs of files. Each pair consisted of 2 successive revisions of text, deposited in a data base maintained by the Revision Control System[12]. This system stores multiple revisions of text files as differences. Almost all of the sample files contained program text. We observed that `diff` and `bdiff` execute with similar speeds, but that `bdiff` produces deltas that are, on the average, only about 7% smaller. Apparently, block moves and duplicate lines in program text are not frequent enough to obtain significant space savings over LCS algorithms. We expect that the situation is more advantageous for block moves in the other applications mentioned in the introduction.
**Second Improvement of the Basic Algorithm**
A different improvement speeds up our basic algorithm even if the source string contains numerous duplicated symbols. The improvement involves an adaptation of the Knuth-Morris-Pratt string matching algorithm[8], which allows a pattern of length ℓ to be found in a string of length m in \( O(m+ℓ) \) steps. Thus, if \( S \) is of length \( m \), \( T \) is of length \( n \), and the average block move is of length \( ℓ \), our algorithm should operate in \( O((m+ℓ)*(n/ℓ)) = O(mn/ℓ) \) steps. Note that the ratio \( m/ℓ \) is a measure of the "difference" of \( S \) and \( T \), and that the runtime of the algorithm is proportional to that ratio. Note also that this measure is independent of the permutation of the common substrings in $T$ with respect to $S$.
An important element in the Knuth-Morris-Pratt algorithm is an auxiliary array $N$ which indicates how far to shift a partially matched pattern or block move after a mismatch. The array $N$ is as long as the pattern, and is precomputed before the match. Precomputing $N$ poses a problem for our algorithm. Since we do not know how long a block move is going to be, we would have to precompute $N$ for the entire unprocessed tail of $T$, although we would normally use only a small portion of it. Fortunately, $N$ can also be computed incrementally. The outline of the adapted pattern matching algorithm is as follows.
Assume the next unmatched symbol is $T[q]$. Start by initializing $N[q]$ and apply the Knuth-Morris-Pratt algorithm to find the first occurrence of $T[q]$. (Note that this is a pattern of length 1.) If this pattern cannot be found, there is no block move including $T[q]$. Otherwise, expand the pattern by 1, compute the next entry in $N$, and reapply the Knuth-Morris-Pratt algorithm to find the first occurrence of the expanded pattern. Start the search with the previous match. Continue this process, until the pattern reaches a length for which there is no match. At that point, the previous match is the maximal block move.
Suppose the maximal common block move starting with $T[q]$ is of length $l$. The last attempted pattern match is therefore of length $l+1$, and fails. The incremental computation of the entries $N[q, \ldots, q+l+1]$ at a total cost proportional to $l$ assures that the cost of the average match remains $O(m+l)$.
The detailed program is given in the appendix. It is useful for applications (3) and (4) mentioned in the introduction. The idea of incrementally computing auxiliary data structures can also be applied to the Boyer-Moore pattern matching algorithm[1], resulting in a program that runs even faster on the average.
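The core operation of this improvement, finding the longest prefix of the unmatched tail of $T$ that occurs in $S$, can be sketched in Python with a Knuth-Morris-Pratt failure function (computed up front here for clarity, rather than incrementally as the paper proposes; names are illustrative):

```python
def longest_prefix_match(S, P):
    """Return (start_in_S, length) of the longest prefix of P that
    occurs as a substring of S, using a KMP failure function."""
    if not P:
        return (0, 0)
    # failure function: N[i] = length of the longest proper border of P[:i+1]
    N = [0] * len(P)
    k = 0
    for i in range(1, len(P)):
        while k > 0 and P[i] != P[k]:
            k = N[k - 1]
        if P[i] == P[k]:
            k += 1
        N[i] = k
    best_pos, best_len = 0, 0
    j = 0  # number of pattern symbols currently matched
    for i, c in enumerate(S):
        while j > 0 and c != P[j]:
            j = N[j - 1]
        if c == P[j]:
            j += 1
        if j > best_len:
            best_len, best_pos = j, i - j + 1
        if j == len(P):
            break  # whole pattern found; cannot do better
    return (best_pos, best_len)
```

Each symbol of $S$ is inspected once, so a single block-move search costs $O(m+l)$ regardless of how many duplicated symbols $S$ contains.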
**Reconstructing the Target String**
An edit script that reconstructs target string $T$ from source string $S$ is a sequence of move and add commands. The commands build a string $T'$ left-to-right. Each block move $(p,q,l)$ in $\Delta_S(T)$ is represented by a command of the form $M\ p,l$, which copies the string $S[p, \ldots, p+l-1]$ to the end of the string $T'$. For any substring $T[u, \ldots, v]$ consisting entirely of symbols that do not occur in $S$, the edit script contains the command $A\ T[u,\ldots,v]$, which simply appends the unmatchable substring to \( T' \). After completion of all edit commands, \( T = T' \).
In general, \( T \) cannot be constructed in a single pass over \( S \), because block moves may cross (cf. examples in Sect. 3). If \( S \) is a sequential file, one can minimize the number of rewind operations caused by crossing block moves as follows. During the generation of the edit script, it does not matter which one of 2 or more equivalent block moves is chosen. For example, suppose we have the following equivalent, maximal block moves starting with \( T[q] \): \( B1 = (p_1,q,l) \) and \( B2 = (p_2,q,l) \), with \( p_1 < p_2 \). If the previous block move emitted had its \( S \)-endpoint between \( S[p_1] \) and \( S[p_2] \), choosing the block move \( B2 \) saves one rewind operation for \( S \). Our algorithms are easily modified to accommodate this idea. Rather than starting at the left end of \( S \) while searching for the longest possible match, they must start with the endpoint of the previous match and “wrap around” at the end of \( S \).
So far, we have presented our edit scripts as constructing \( T \) separately from \( S \). It is also possible to transform \( S \) “in place”. The following paragraphs discuss the algorithm in some detail.
Suppose we have a buffer \( B[0, \ldots, \text{Max}(m,n)] \) initialized to \( S \), i.e., \( B[i] = S[i] \) for \( 0 \leq i \leq m \). The goal is to transform the contents of \( B \) to \( T \). The key to this algorithm is an auxiliary array \( A[0, \ldots, m] \), which keeps track of the positions of the original symbols \( S[i] \) in \( B \). Initially, \( A[i] = i \) for \( 0 \leq i \leq m \). A marker \( h \) moves through \( A \) from left to right, giving the index of the rightmost symbol of \( S \) involved in a block move so far. Thus, after the \( k \)th move command \( M\ p_k,l_k \), \( h = \text{Max}(p_j+l_j-1,\ 1 \leq j \leq k) \). There is also a marker \( t \) indicating the index of the last symbol processed in \( B \).
The first step is to remove all symbols from \( B \) which are not in \( T \). This step preprocesses the edit script to isolate the symbols to be deleted, and then actually removes them from \( B \). It also updates the mapping array \( A \) to reflect the compression, and marks those entries of \( A \) as undefined whose counterparts in \( B \) were deleted. The second step processes the edit commands in sequence. An add command simply inserts the given string to the right of \( t \), and resets \( t \) to point to the last symbol so inserted. It also updates the array \( A \) for the symbols shifted right by the insertion. For each move of the form \( M\ p,l \), compare \( p \) and the current value of \( h \). If \( p > h \), then the current block move is to the right of the previous one. The symbols between \( h \) and \( p \), i.e., \( B[A[h+1]], \ldots, B[A[p-1]] \), are not included in the current move, but will be moved later. Mark them as such and set \( h \) to \( p+l-1 \) and \( t \) to \( A[h] \). Thus, the characters \( S[p, \ldots, p+l-1] \) will be included in the result. Otherwise, if \( p \leq h \), the current block move crosses the previous one, and a substring located before \( t \) must be moved or copied forward. All symbols in that string that were marked for moving by an earlier command are now moved, the others are simply copied forward. It is conceivable that the current block move involves symbols to the left and right of \( h \). In that case, first handle the string to the left of \( h \) by moving or copying elements of the string \( B[A[p]], \ldots, B[A[\text{Min}(p+l-1,h)]] \) after \( B[t] \). The remaining (possibly empty) string \( S[h+1, \ldots, p+l-1] \) is simply included by setting \( h \) to \( \text{Max}(p+l-1,h) \). Update \( A \) to reflect the moves and shifts, and set \( t \) to \( A[h] \).
Below is a trace of the algorithm, transforming the string \textit{shanghai} to \textit{sakhalin} by applying the edit script \textit{M}0,1; \textit{M}2,1; \textit{A}"k"; \textit{M}1,2; \textit{A}"l"; \textit{M}7,1; \textit{M}3,1.
The algorithm can be applied to update display screens efficiently, provided the display offers operations for character and line insertion and deletion, as well as a \textit{copy/move} feature. The latter feature is needed for copying and moving character strings forward in the above algorithm. The auxiliary array \( A \) is allocated in main memory.
[Trace: the buffer contents after removing unused symbols; after applying M 0,1; M 2,1; after A "k"; after M 1,2; after A "l"; and after M 7,1; M 3,1.]
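The simpler two-buffer reconstruction (building $T'$ separately from $S$, as described earlier) is straightforward; a Python sketch, with the command encoding `('M', p, l)` / `('A', text)` chosen for illustration:

```python
def apply_script(S, script):
    """Rebuild the target string from S by replaying an edit script of
    ('M', p, l) move commands and ('A', text) add commands.  This is
    the two-buffer reconstruction, not the in-place variant above."""
    out = []
    for cmd in script:
        if cmd[0] == 'M':
            _, p, l = cmd
            out.append(S[p:p + l])       # copy S[p..p+l-1] to the end of T'
        else:
            out.append(cmd[1])           # append unmatchable symbols
    return ''.join(out)
```

Replaying the shanghai-to-sakhalin script above reproduces the target string.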
**Conclusions**
The original string-to-string correction problem as formulated in [13] permitted the editing commands add, delete, and change. Clearly, a change command can be simulated with a delete followed by an add. Any sequence of add and delete commands can be transformed into an equivalent sequence of add and move commands. This transformation works since delete and move commands complement each other, provided no block moves cross or overlap. Our approach of extending the editing commands by permitting crossing block moves results in shorter edit sequences. We developed efficient algorithms for computing those sequences. Reconstructing the target string by applying the edit sequence is efficient if the source string can be accessed randomly.
**Appendix: Using the Knuth-Morris-Pratt Pattern Matching Algorithm**
\( S \): array\([0..m]\) of symbol;
\( T \): array\([0..n]\) of symbol;
\( N \): array\([0..n]\) of integer;
\( q := 0; \) \{ start at left end of \( T \) \}
while \( q <= n \) do
begin
\{ Characters left in \( T \); find longest match starting with \( T[q] \) \}
\( k := 0; \) \{ start match at left end of \( S \) \}
\( j := q; \) \{ first symbol of pattern \}
\( \text{last} := q; \) \{ last symbol of pattern \}
\( N[q] := q-1; \) \{ initialize \( N[q] \) \}
\( iN := q-1; \) \{ initialize computation of \( N[q+1, \ldots] \) \}
loop \{ loop with exit from the middle \}
\{ try to find a match for \( T[q]..T[\text{last}] \) \}
\{ \( T[q]..T[\text{last}-1] \) has already been matched \}
\( kOld := k; \) \{ save last point of old match, if any \}
while \( (j<=\text{last}) \) and \( (k<=m) \) do
begin
\{ found match; now increase last and compute \( N[\text{last}] \) \}
while \( (iN>=q) \) and \( (S[k] <> T[j]) \)
do \( iN := N[iN]; \)
\( \text{last} := \text{last}+1; iN := iN+1; \)
if \( T[\text{last}] = T[iN] \)
then \( N[\text{last}] := N[iN]; \)
else \( N[\text{last}] := iN; \)
end
\{ end of loop \}
\{ print match \}
if \( j>\text{last} \) then
begin \{ found match for tail of \( T \) \}
print(\( k-(n-q+1), q, n-q+1 \));
\( q := n+1; \)
end else if \( q = \text{last} \) then
begin \{ no match \}
\( q := q+1; \)
end else
begin \{ last match failed; take previous one \}
print(\( kOld-(\text{last}-q), q, \text{last}-q \))
\( q := \text{last}; \)
end
end
**References**
Fragment level Phong illumination
Introduction
Phong illumination really isn't something new. The Phong illumination model has been around for almost three decades now. First introduced by Phong Bui-Tuong in 1975, this model is still frequently used both in the offline rendering world and the real-time graphics world. Due to the complex math behind the model, it has until recently only been used for vertex lighting in the real-time rendering world. Both the Direct3D and OpenGL illumination models closely follow the Phong model with some small variations. Doing it on a vertex level often causes visible artifacts and a less than convincing look unless you use a very high tessellation. With advances like the dot3 operation in the fixed-function pipeline we got a step closer to getting lighting on a per-pixel level. Unfortunately, the limitations of the fragment processing pipeline meant a lot of compromises had to be made, even in DirectX 8 level pixel shaders. With the limited range of [-1,1], or [-8,8] in PS 1.4, and with the limited precision the DirectX 8 level graphics cards offer, much of the required math is simply not possible to do. Further, the fact that there are no advanced math instructions in these graphics solutions is another obstacle on our way towards advanced lighting, not to mention the instruction limit. For these reasons, tricks like packing attenuation into a 3D texture, using cubemaps for normalization, and using textures as lookup tables for exponentiation of the specular component have been the norm for the past generation.
Fortunately, this will sooner or later be nothing but a bad memory of the past. With DirectX 9 level hardware we not only have the close to infinite range of floating-point components and much higher precision, we are also able to do advanced math and have a whole lot more instructions to play with before reaching the hardware limits. This means that for the first time ever we are able to truly evaluate the Phong illumination model for each pixel, completely in a pixel shader. I will state at this point, however, that even though we are finally able to evaluate the whole Phong illumination model in the pixel shader, there are still considerations and limitations that need to be addressed. The number one consideration one needs to take into account is of course performance. Even with the top high-end graphics cards of today the full equation can be quite demanding on the fragment pipeline, and if care is not taken performance will suffer. We'll address some of these issues later on in this article.
The Phong illumination model
Let me start by introducing the Phong illumination model:
\[ I = A_{\text{coeff}} A_{\text{color}} D_{\text{color}} + \sum \left( \text{Att} \cdot L_{\text{color}} \left( D_{\text{coeff}} D_{\text{color}} \left( N \cdot L_i \right) + S_{\text{coeff}} S_{\text{color}} (R \cdot V)^{S_{\exp}} \right) \right) \]
So what does all this do? Let's consider every component and their purpose. The first component, I, is of course the resulting color or intensity. The other components, A, D and S, represent three different attributes of light and are called ambient, diffuse and specular. We'll begin with diffuse as it's the most intuitive (though not the simplest) of these. To understand what diffuse lighting is, take a piece of paper and point a light towards it (or just imagine it in your head). The paper may represent a polygon in our little world. When the paper faces the light it'll receive a lot of light and will look bright white. Now slowly turn the paper around until the edge faces the light instead. As you can see, it fades with the angle as the paper faces away from the light. This phenomenon is what diffuse lighting represents. The actual math behind this is what we see in the middle of the equation above, \( N \cdot L_i \). N is the normal of the surface and \( L_i \) is the light vector. The light vector is a vector that points from the point we're lighting towards the light. The light vector should be normalized, that is, be of length 1. The same should of course be true for the normal too. The dot product factor will thus be a number between -1 and 1. We don't want negative light contribution, so all dot products in this article are assumed to be clamped to the [0...1] range. Why does this expression give us the desired result? Let's illustrate it with an image:
A dot product between two perpendicular vectors will return 0; that’s the case with light lying in the surface plane in the illustration above. Anything behind the surface will return a negative number and thus be clamped to 0. A light shining perpendicularly towards the surface from above will return 1, and anything lighting the surface from an angle will get a higher contribution as the light vector approaches the surface normal. Quite intuitive, but this is of course no proof of correctness. At this time it’s better to spill the beans: the Phong model isn’t correct. It’s just an approximation of how light tends to behave, but nowhere near acceptable for studying optics. However, in graphics we don’t need correctness; our main concern is to please the eyes of human beings. Thus the motto is: if it looks good, then it is good. Phong illumination looks good and consequently is good. So the fact that it can’t predict how photons interact with matter is not going to concern us a whole lot.
If we go back to the equation again you can see that the diffuse contribution is multiplied with two other variables, Dcoeff and Dcolor. Dcolor is the color of the material of the surface, commonly represented by a texture or a constant color. We will use a texture, and this texture is the normal base material texture as used in many applications and games and should not need any further introduction. Dcoeff is simply a variable telling how much the diffuse component is going to contribute to the whole lighting equation; you’ll notice that there are also Acoeff and Scoeff variables which control how much ambient and specular we want. For performance one does not necessarily need to care about all of these. In fact, it can be beneficial to just bake the Dcoeff into the base texture. If you want less diffuse contribution you can just use a darker texture, and similarly for ambient and specular. With that in mind we can rewrite the equation in a somewhat simpler form, where the components A, D and S have their coefficients and colors pre-baked into single entities.
\[ I = AD + \sum_i \left( \text{Att} \cdot L_{\text{color}} \left( D(N \cdot L_i) + S(R \cdot V)^{S_{\exp}} \right) \right) \]
**The specular component**
So far we have discussed diffuse lighting only. Diffuse lighting works well for materials like wood, stone, fabric etc. But it won’t work that well for materials like plastic, metal and porcelain. Why not? These materials have a property that for instance rough wood lacks: they are shiny. Shininess is the property that the specular component tries to resemble. For rough wood you could do without any shininess and it would look pretty good. But even for rough wood a small specular component can enhance the image. There’s this saying in graphics, “if you can’t make it good, make it shiny”. But be careful though, with Phong illumination you can make it good so you shouldn’t need to resort to making it overly shiny. Unless you’re trying to make it look like polished wood you should use a low specular component. The best images are created by carefully balancing specular and diffuse according to the properties of these materials in real life.
So how is the specular component calculated? The idea is similar to that of the diffuse component. To begin with, compute the dot product between the reflection vector R and the view vector V. The view vector is similar to the light vector, except that it points from the lit point towards the camera rather than towards the light. This reveals a significant property of specular lighting: while diffuse lighting is viewpoint independent, specular is by its pure nature very viewpoint dependent. If you navigate around in your little computer graphics world, it doesn’t matter from where you observe a piece of rough wood; it’ll look the same regardless of where you view it from. That’s not true for materials like plastics though. As you move around you’ll see the reflection of the light in the surface, and as the viewpoint changes the reflection will move around. To illustrate the behavior of specular, take a look at this picture.
You of course get maximum reflected light if you view the surface from somewhere along the reflection vector. As you move away from this vector you see less and less reflected light. If you were to use the dot product of the view vector and reflection vector directly, the surface still wouldn’t look particularly shiny; rather it would just look washed out. Why is that? Think of a perfectly reflecting surface, a mirror in other words. A mirror will only reflect light in the exact direction of the reflection vector. That is, if you viewed it at a slight angle off from the ideal reflection angle as in the picture above you wouldn’t see any reflected light at all. Thus in that case the dot product alone obviously doesn’t work. Also think of a dull surface. It will reflect light in all possible directions, so reflections should be visible from pretty much everywhere, even though you don’t see a sharp reflection but rather just a uniformly lit surface. The difference is of course the spread. The mirror doesn’t have any spread while the dull material has a significant spread. In other words, the more reflective the material is the faster the light falls off as you move away from the reflection vector. Enter the specular exponent. As you can see in the Phong equation the dot product of the view vector and the reflection vector is raised to a power. This exponent represents the shininess of the material. The higher the exponent the shinier the material is. A specular exponent of infinity is a mirror, and a specular exponent of 0 is a completely dull surface where light is spread equally in all directions. If you didn’t raise the specular to a power, basically using a specular exponent of 1, you’d still have a pretty dull surface. Normal values of the specular exponent tend to be around 8 – 64. We will use a constant specular exponent of 24, something I chose because it looks pretty good; remember, if it looks good then it is good.
With pixel shaders v2.0 nothing really prevents us from changing the shininess of the surface by storing the exponent in a texture and using that as a lookup table for specular exponents for each pixel. This can be used to let rusty parts of metal be non-shiny while letting the intact parts shine as appropriate. A dark region in this texture represents a non-shiny area while bright regions are those that are shiny. I’ll leave this as an exercise for the interested, however, and instead focus on a more important part by which we can create a quite similar effect, namely gloss.
Gloss is basically just another word for the specular coefficient. As you probably remember, we baked the coefficients together with the colors for each of the components: ambient, diffuse and specular. One often leaves the specular color as white, which basically reduces the S component to nothing but the specular coefficient, or the gloss. This is because most shiny materials don’t significantly change the color of the light as it reflects off the surface. Some materials do though, and if you’re going to simulate this behavior you should of course keep the specular color component. Gloss is an important part of the equation, however, and should generally be left in. It often gives better results to just alter the gloss instead of the specular exponent across a surface to do effects like rusty parts of a metal surface as mentioned above. So we will use a texture containing the gloss, a so called gloss map. If you want to use a specular color you can bake it into the gloss map, but in our case we will take advantage of the fact that we only have a single property to take care of and use a single channel texture to store our gloss map, which reduces the bandwidth needed.
**Attenuation**
In real life light will fade as the lit surface gets farther from the light. The falloff is roughly a $1/r^2$ function (think of the area of a sphere with the light in its center). In real life light sources aren’t really dimensionless points in space either. A light bulb, for instance, while not particularly large, is not exactly infinitesimal either.
So if we applied an attenuation factor of \(1 / r^2\) we wouldn’t get very realistic results. To better capture the behavior of light a slightly more complex function is commonly used.
\[
\text{Att} = \frac{1}{c + l \cdot r + q \cdot r^2}
\]
We have constant, linear and quadratic attenuation; that’s \(c, l\) and \(q\) in the formula above. It’s not necessary to use all components; I usually drop the linear component since it doesn’t add a whole lot, and it’s the one that places the heaviest load on the fragment pipeline as it requires a square root. Usually it’s enough to just offset the inverse square function with a constant, and usually setting this constant to 1 will suit us well. So the attenuation function we will use is
\[
\text{Att} = \frac{1}{1 + q \cdot r^2}
\]
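As a sketch, this attenuation function is trivial to express in C. Note that it only needs the squared distance, which a shader gets for free from a dp3 of the light vector with itself, so no square root is required (the helper name is mine):

```c
/* Att = 1 / (1 + q * r^2), where r2 is the *squared* distance
   to the light -- exactly what dp3(lightvec, lightvec) produces. */
static float attenuation(float q, float r2) {
    return 1.0f / (1.0f + q * r2);
}
```

At the light itself (r2 = 0) the factor is 1, and it falls off smoothly with distance.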
**Ambient**
If you were to implement the lighting equation as we have gone through so far you would get quite good results. However, there’s still something that will hurt the impression of reality. Polygons in our little virtual world that face away from all our lights will be black. This may sound natural as no light would hit them. However, our experience tells us otherwise. If you’re in a decently lit room you’ll have a hard time finding a surface that’s so dark that you can’t see its details and texture. Nothing really gets black. Why is that? When light hits a surface some of it scatters back. Some of that light hits our eyes, which is the sole reason we can see anything at all. Not every photon scattering off from a surface will hit the eyes of the viewer though; some will bounce away and hit other surfaces. Some of that light will then once again scatter back into the scene. This is called indirect lighting and is something that unfortunately our Phong model doesn’t take care of. Fortunately, there’s a very cheap way to fake it though. Enter the ambient component. While none of the components of Phong illumination is particularly real or physically correct, the ambient is the most fake of them all. In fact, it clearly goes against all our knowledge about light. But as always, if it looks good then it is good. Ambient gets rid of the blackness of unlit surfaces and gives a decent impression that indirect light is present in the scene. This alone is a noble enough goal to motivate its place in the Phong model, and given how cheap it is to implement you would rather have to motivate your stance if you were not going to use ambient.
So what is ambient then? Basically it’s nothing but a constant light that hits every surface. One assumes that the light scattered off from the surfaces in the scene is uniformly distributed in all directions and all places. This is hardly close to the truth, but it works reasonably well for most normal scenes. With light hitting a surface uniformly from all directions you get no reflective kind of behavior à la specular. It’s also completely angle independent, so anything like diffuse is out of the window too. Basically you end up with just the texture color multiplied with a constant of how much ambient you want in the scene; very simple but quite effective. For being so effective and yet so cheap it’s easily the most worthwhile calculation your fragment shader can do.
**Fragment level evaluation**
In real life few surfaces are really flat; this is a painful truth for the graphics artist as it becomes so much harder to create realistic environments given the base primitive of 3D graphics. However, there are solutions and Phong illumination on a fragment level gives you opportunities to ease the burden on the artist without the need for zillions of tiny triangles to simulate rough surfaces. Also, it would be wasteful to do all this work on every pixel without taking advantage of the possibilities this gives you. One could for instance just interpolate the normals and evaluate the Phong equation on each pixel. While this would certainly look better than normal per vertex lighting it would still look flat. Fortunately, the Phong illumination model still has headroom to improve this significantly. The solution is of course instead of just interpolating the normals to store them in a texture and look them up on a per pixel level. This is what’s commonly called a normal map, or bump map. This will let you give surfaces properties that real surfaces tend to have like roughness, bumpiness and fine details. However, this introduces some important issues and the full concept can be a significant threshold for many people to get over. We’ll take it from the beginning though and will study the issues it raises in detail.
So let’s assume that we have a texture with the normals stored. We sample this texture in our pixel shader and do the entire math. Will this work? Those who have tried (for instance me, before I understood these issues) can assure you that it’ll look very odd and incorrect. It’ll look ok at some spots but wrong in most others. Well, if the normal map was created exactly for the given direction of a polygon it would work. But we can’t create a separate normal map for every direction a texture may be located in our scene. Not only would our dear artist refuse to take on this tremendous job, but even if he did we would bring the graphics card to its knees due to the extreme memory requirements. So this is obviously not an option.
Ideally we would want a base texture, a normal map and a gloss map to go together for each material. This is the solution I’ll come to in the end, so why do I insist that just sampling a texture and doing the math requires a separate texture for each possible direction? Consider the simplest example possible: you are inside a cube. All six faces use the same texture and the same normal map. Now assume we want them all to look flat, so we store a constant normal, say for instance (1, 0, 0), in the normal map. Now applying this to all six faces we’ll get something like in the illustration.
Of course you’d want the normals to point into the box; the faces of the cube obviously have different normals, and in this case only one face has correct normals. It may seem impossible at first that the faces can share the same normal map given that they are oriented differently and have very different normals. Using a separate normal map may seem to be the only solution. Fortunately, there’s a better solution.
**Tangent space**
To solve the problem we need to introduce the concept of a vector space. Imagine that we removed the axis pointers in the picture above. How would we know which direction is the X direction? We just wouldn’t know! Why? Because the direction of X is nothing but an arbitrary choice we have made. There’s no fundamental truth behind this choice. It’s just a choice as good as any. Imagine that we put X into Y’s position and vice versa. Suddenly the (1, 0, 0) normal would be incorrect for the face that it was correct for before. Not only that, but suddenly it’s correct for the face at the bottom of the cube. Now imagine that we used different meanings of X, Y and Z for each face. What would that imply? (1, 0, 0) can be the correct normal for every face; we only need to adjust our coordinate system to suit the normals. This may seem backwards, but it is an extremely handy thing to do in graphics.
A vector space is basically just a coordinate system. You have three vectors defining the direction of each major axis. These are the vectors pointing in the X, Y and Z directions as defined by that vector space. There are two vector spaces that are important to us right now. First, the standard vector space we place all our objects into. This is called world space. This is the vector space you’ve been using even though you may not have realized it. As we place our objects in absolute coordinates, the world space is defined by the vectors (1, 0, 0), (0, 1, 0), (0, 0, 1). The other space that’s important to us is the so called tangent space. It is defined by the tangent vectors and the surface normal. Note that we still need the surface normal even though we have normals stored in the normal map. The difference though is that the surface normal is a normal normal (no pun intended), i.e. it’s defined in world space. The normal map however contains normals in tangent space. To better understand the concept of tangent spaces, try to think of a textured quad. The vectors that define this vector space are the ones that point in the direction of the u and v texture coordinates in world space. The normal points perpendicularly right up from the surface as usual. The picture at the side may help you understand the concept. The tangent space in this picture is thus defined by \((0, 1, 0), (0, 0, -1)\) and \((1, 0, 0)\), since the direction of the U texture coordinate points in the Y direction, V points in the -Z direction and the normal points in the X direction.
Alright, now that we have our tangent space, what’s up next? Well, we’ll need to store the tangent space for each vertex in our geometry and pass that along with the vertex and texture coordinates to the vertex shader. The vertex shader needs to transform the light vector and the view vector into tangent space and pass them along to the pixel shader. The pixel shader can then work as usual and use the normal from the normal map.
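The tangent-space transform is just three dot products, one per basis vector, which is exactly what the vertex shader below does with its dp3 instructions. A C sketch with a hypothetical Vec3 type:

```c
/* Hypothetical vector type for illustration. */
typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* World -> tangent space: one dot product against each basis
   vector (uVec, vVec, normal), mirroring the shader's three dp3s. */
static Vec3 to_tangent_space(Vec3 v, Vec3 uVec, Vec3 vVec, Vec3 normal) {
    Vec3 r = { dot3(v, uVec), dot3(v, vVec), dot3(v, normal) };
    return r;
}
```

With the quad basis from the picture, (0, 1, 0), (0, 0, -1), (1, 0, 0), a world-space vector pointing along Y ends up along the tangent-space U axis, as expected.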
An obvious question at this point is of course: how do we create the normal map? Unfortunately there’s no general method for creating a normal map from a base texture. Instead the artist needs to create the normal map along with the base texture as two separate but obviously connected entities. It’s quite unintuitive to draw a normal map however; a height map is much more intuitive. It’s easier to think of white as high and black as low than it is to think of pink as pointing to the right and light green as pointing down etc. Fortunately, there’s a general way of converting a height map into a normal map, which can also be done at load time. All you need to do is apply a sobel filter to every pixel. Alright, great, another technical term to learn, I suppose many of you are thinking. The concept of a sobel filter is quite simple though. A sobel filter basically finds the slope of a grayscale picture. First you apply the sobel filter in the X direction, then in the Y direction, and form the vector \((dX, dY, 1)\). Then normalize this vector and you’re done. The filter kernels look like this:
\[
\begin{pmatrix}
-1 & 0 & 1 \\
-2 & 0 & 2 \\
-1 & 0 & 1 \\
\end{pmatrix} \quad \begin{pmatrix}
-1 & -2 & -1 \\
0 & 0 & 0 \\
1 & 2 & 1 \\
\end{pmatrix}
\]
If you’re unfamiliar with the concept of filter kernels, just place the pixel you’re filtering right now in the middle square. Then multiply each pixel that each square covers with the number that’s in that square and sum it all together. The result is your filtered value. So applying the left filter will give you \(dX\) and the right one will give you \(dY\).
**Implementation**
If you’ve read everything to this point I suppose you are getting a little tired of all the theory. So without further ado we’ll dive straight into the implementation. The first thing we need to define is our vertex format. As we’ve concluded earlier in this text the data we need is a vertex position, a texture coordinate and our tangent space. This gives us this vertex format:
```c
struct TexVertex {
Vertex vertex;
float s, t;
Vertex uVec, vVec, normal;
};
```
Now we need to feed this info into the vertex shader. Feeding the vertex and texture coordinates into a vertex shader should be pretty much straightforward. It’s important to note at this time though that texture coordinates no longer need to be in any way related to textures. They are really nothing but generic interpolated properties. So we will feed info into
the vertex shader through texture coordinates and then pass new texture coordinates from the vertex shader into the pixel shader. So the vertex declaration will look like this:
```c
D3DVERTEXELEMENT9 texVertexFormat[] = {
{ 0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
{ 0, 1 * sizeof(Vertex), D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
{ 0, 1 * sizeof(Vertex) + 2 * sizeof(float), D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1},
{ 0, 2 * sizeof(Vertex) + 2 * sizeof(float), D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2},
{ 0, 3 * sizeof(Vertex) + 2 * sizeof(float), D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3},
    D3DDECL_END()
};
```
The vertex shader needs to compute the light vector and the view vector from the provided data. Thus we’ll need to provide the vertex shader with the camera position and light position. This is best done with vertex shader constants, as these attributes don’t change with the geometry in any way. Once the view and light vectors are done we need to transform them into tangent space. The transformation is just a matrix multiplication, which by the way is nothing but a set of dot products. As these are three-dimensional properties we need only do a dp3 operation with each of uVec, vVec and the normal. The resulting vertex shader ends up as something like this:
```c
vs.2.0
dcl_position v0
dcl_texcoord0 v1 // TexCoord
dcl_texcoord1 v2 // uVec
dcl_texcoord2 v3 // vVec
dcl_texcoord3 v4 // normal
// c0-c3 = mvp matrix
// c4 = camera position
// c5 = light position
// Transform position
m4x4 oPos, v0, c0
// Output texcoord
mov oT0, v1
sub r0, c5, v0 // r0 = light vector
dp3 oT1.x, r0, v2
dp3 oT1.y, r0, v3
dp3 oT1.z, r0, v4 // oT1 = light vector in tangent space
sub r1, c4, v0 // r1 = view vector
dp3 oT2.x, r1, v2
dp3 oT2.y, r1, v3
dp3 oT2.z, r1, v4 // oT2 = view vector in tangent space
```
Alright, everything should now be properly set up for the most important piece of code of ours, the pixel shader, which will do all the tough work. As everything is now in tangent space we can just carry on all operations just as if all data, including the normal from the normal map, had been in world space. The pixel shader will be much longer, so we’ll go through it step by step instead of just printing the entire code right here. We’ll start with the diffuse component.
```c
ps.2.0
dcl t0.xy
dcl t1
dcl_2d s0 // base texture
dcl_2d s1 // normal map
```
Alright, this should be pretty straightforward. We begin by sampling our base texture and grabbing the normal from the bump map. We could of course have used floating point textures given that normals can have components which range from -1 to 1, but that would reduce performance without a whole lot of image quality improvement. Actually, it would reduce the image quality on current hardware since at the time of this writing no hardware is available that supports filtering on floating point textures. So instead we take the traditional approach of packing it into a normal D3DFMT_X8R8G8B8 texture. This means we will have to unpack it in our shader though, and that’s the mad (as in multiply and add, not crazy) instruction right after the sampling. Note that the linear filter on the normal map isn’t really that suitable for normals, so after the filtering the normal may no longer be of unit length but rather slightly shorter. This may not matter a whole lot for diffuse, but it will matter quite a lot for specular. If the length is 0.99 instead of 1.0 and you raise it to say 24 it’ll not end up at the wanted 1.0 but rather something much lower, \(0.99^{24} \approx 0.785\), which will make our specular highlights significantly less sharp. So the post-filter normalization is certainly needed, even though maybe not this early, but it doesn’t hurt to use a better normal for diffuse too. The normalization process is quite simple. As you may remember from your linear algebra lectures, a vector dot-multiplied with itself is the squared length of that vector. So what we do is take the inverse square root of that squared length, which gives us the inverse of the length. Multiply the vector with the inverse length and the normalization is done. The same is then done to the light vector. After the normalizations we can just do the dot product between these vectors, multiply that with the base texture and our diffuse is done. Note that we use dp3_sat as opposed to just dp3. This is so that all negative dot products get clamped to zero. We don’t want negative light, remember?
So far the output doesn’t look particularly impressive. The most obvious drawback is the lack of attenuation. So far only the angle matters, not how far from the light the surface is. We’ll remedy the problem right away. So we’ll need this piece of code inserted right after the light vector normalization.
```
// c2 = (constant attenuation, quadratic attenuation, unused ...)
mad r5, c2.y, r7, c2.x
rcp r5, r5.x // r5 = attenuation
```
This will give us our wanted attenuation factor which we can multiply with our diffuse to get a properly attenuated light. So the last step in the shader changes as follows.
```
mul r4, r4, r0 // r4 = base * diffuse
mul r4, r4, r5 // r4 = base * diffuse * attenuation
mov oC0, r4
```
Next up is our specular. To begin with we’ll need to sample our gloss map. It has the same texture coordinates as the base texture and normal map so it’s straightforward to add. As you may remember from our vertex shader above we get our view vector in t2. So we’ll normalize as we did with the light vector. We then need to compute
the reflection vector. The reflection vector is given by $2(L \cdot N)N - L$ as illustrated by the image below.
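The reflection formula is easy to sketch in C (hypothetical Vec3 helpers; both L and N are assumed normalized):

```c
typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* R = 2(L.N)N - L, matching the mul/mad pair in the shader below. */
static Vec3 reflect3(Vec3 l, Vec3 n) {
    float k = 2.0f * dot3(l, n);
    Vec3 r = { k * n.x - l.x, k * n.y - l.y, k * n.z - l.z };
    return r;
}
```

A light straight along the normal reflects back onto itself, and a grazing light flips to the other side of the normal, as the figure suggests.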
Once the reflection vector is done we basically just need to do the dot product, raise it to a power and multiply with the gloss and we’re done. We’ll add the specular exponent to the first constant. The code ends up something like this:
```
dcl t2
dcl_2d s2
...
def c0, 2.0, 1.0, 24.0, 0.0 // (2.0, 1.0, specular exponent, 0.0)
...
texld r2, t0, s2 // r2 = gloss
...
dp3 r7, t2, t2
rsq r7.w, r7.x
mul r6, t2, r7.w // r6 = normalized view vector
dp3 r7, r3, r1
mul r7, r7, c0.x
mad r3, r7, r1, -r3 // r3 = reflection vector
dp3_sat r3, r3, r6
pow r3, r3.x, c0.z // r3 = specular
mul r3, r3, r2 // r3 = specular * gloss
```

Given the discussion above there shouldn’t be a whole lot of question marks over this code. Now we just need to combine it with the diffuse component. The last piece of code ends up as follows.
```
mad r4, r4, r0, r3 // r4 = base * diffuse + specular * gloss
mul r4, r4, r5 // r4 *= attenuation
mov oC0, r4
```
The last piece of the equation that remains is now the ambient, which is also the simplest to implement. So without further ado we’ll go right at the task. We’ll need to pass the ambient factor to the shader. There are some unused components in our c2 constant, so we’ll just use one of these. Then we only need to squeeze another instruction into the final combining code.
```
// c2 = (constant attenuation, quadratic attenuation, ambient, unused)
mad r4, r4, r0, r3 // r4 = base * diffuse + specular * gloss
mul r4, r4, r5 // r4 *= attenuation
mad r4, r0, c2.z, r4 // r4 += base * ambient
mov oC0, r4
```
Yes, that’s it. The Phong model is now completed and ready for some serious action.
**Aliasing**
While we already get pretty good results, there are still a couple of issues that need to be addressed. One such issue is aliasing. You probably already know the reasons why we use techniques like mipmapping. If you don’t have the mathematical background, you probably at least know from experience that not using mipmapping will cause severe shimmering artifacts on objects at a distance. Why is that? The mathematical explanation is that it violates the Nyquist frequency. Now that probably sounds like Greek to most people and only those with a signal processing background will be familiar with it. Basically we are stating that the frequency present in the texture is higher than half the sampling rate, which may only confuse you more, but it’s actually a quite easy concept to understand even though it would take a higher degree of mathematical skill to do the reasoning from a mathematical point of view. Assume we are rendering to a resolution of 256x256, a resolution that will hardly ever be used in real life, but for this example it makes the issues easy to understand. Assume we also have a 256x256 texture containing a checkerboard pattern, that is, every other pixel is black and white. Ignoring that we usually have linear filtering, it would seem that mapping this texture onto the full screen will work just fine. Every other pixel gets black and white. Now assume we map it to the upper left 128x128 pixels. Only every other pixel from the texture will end up on screen (still ignoring filters), so we get only the black pixels by seemingly unfortunate bad luck. Obviously information got lost in the process. It’s hard to get something useful in this situation either way, but at least we would want all pixels in the texture to contribute to the final result, producing some kind of grey. Alright you say, and point out that this is exactly what a linear filter will do for us.
True, in this case using a linear filter would be enough, but then consider another checkerboard texture, but with each 2x2 pixel block being either white or black. Mapping this to either 256x256 or 128x128 will work just fine. Now map it to 64x64 and consider the results. Now we’re back in the same situation again. We will get nothing but black as nothing from the white 2x2 pixel blocks will ever be touched by the linear filter. Obviously information once again got lost. Ideally we would want every 4x4 block in the texture to contribute to each pixel. This is basically what mipmapping does. It tries to match the pixel and texel rates by using smaller down-sampled textures to better fit the spacing between where in the texture each pixel would sample. So when mapping to a 256x256 pixel area the full 256x256 mipmap would be used, while when mapping it to a 64x64 pixel area it would use a 64x64 mipmap. For anything in between it would interpolate between the two closest mipmap levels for a smooth transition. Doing this should effectively get rid of all kinds of texture shimmer artifacts related to texture sampling.
Alright, so what’s up with all this theory; the problem is solved, right? Well, I’d love that to be true. Unfortunately it’s not. During the DirectX 7 era one could pretty much state that it was a solved problem, but with the pixel shaders of today we are basically back at square one again. Why? Well, during the DirectX 7 era textures were combined with simple arithmetic operations like modulating a base texture with a lightmap, possibly adding an environment map onto that. Simple arithmetic operations like multiplications and additions don’t change the frequency properties of the texture. So as long as you use these simple operations you’ll be fine. Unfortunately this is not the case with operations like dot products. They basically kick all the assumptions from the reasoning behind mipmapping out of the window. This means that we’ll once again see shimmering. And since the trend is that multisampling replaces supersampling as the preferred anti-aliasing technique, we won’t get any help there either. The situation is however not as horrible as it may first appear. We just need to be aware of the problem and carefully tackle it. While mipmapping may no longer perfectly match our source it certainly helps us a lot. Again, what’s the reason for shimmering? There are too high frequencies in the source material. What can we do about it? Reduce the high frequency components in our textures, or in plain English, use blurrier textures. Important to note here though is that there’s no need to use a blurrier base texture since it will only be part of simple arithmetic operations. Our main target is instead our normal map and to some extent the gloss map. The general advice is to avoid having sharp contrasts in the normal map. You also don’t necessarily need to use the whole 0 to 1 range when creating your height map. Sharp contrasts in the gloss map are generally not desired either. Smoother transitions in the gloss map can help hide the aliasing artifacts slightly.
It’s also noteworthy that a high specular exponent, while giving sharper and generally better looking specular highlights, also adds to the aliasing; so these two factors need to be balanced. A good rule of thumb is to use a blurrier normal map the higher the specular exponent is. That is, a shiny surface will need a blurrier normal map. Aliasing certainly occurs from diffuse too, so you can’t use too sharp normal maps for dull surfaces either. It’s also important to note that the artifacts tend to occur on lower mipmap levels, so it may help to not just down-sample the previous mipmap level when creating the mipmap chain, but also apply a soft blur filter.
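The mipmap-chain construction with an extra blur on the lower levels can be sketched as follows. This is a toy Python version (the function names, the 3x3 box kernel, and the level threshold are my own illustrative choices, not prescribed by the article):

```python
# Sketch: build a mipmap chain by repeated 2x2 box-filter
# down-sampling, applying an extra soft 3x3 blur to the lower
# levels to suppress shimmer from high-frequency content.
def box_downsample(img):
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)] for y in range(h // 2)]

def blur3x3(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        cnt += 1
            out[y][x] = acc / cnt
    return out

def mip_chain(img, blur_from_level=2):
    chain = [img]
    level = 0
    while len(chain[-1]) > 1:
        level += 1
        smaller = box_downsample(chain[-1])
        if level >= blur_from_level:   # extra smoothing on low mips
            smaller = blur3x3(smaller)
        chain.append(smaller)
    return chain

chain = mip_chain([[float((x + y) % 2) for x in range(8)] for y in range(8)])
print([len(m) for m in chain])   # [8, 4, 2, 1]
```

In a real tool you would do this per channel of the normal map and renormalize the vectors afterwards; the sketch only shows the filtering structure.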
If you work for a game or content creation company it’s important that you make sure the artists understand these issues. Unlike many other issues that can be handled gracefully by the programmer, this will require awareness from the artists. The best thing the programmer can do is to educate the artists and provide good tools for previewing the material.
**Shadows**
There’s one thing left that seriously hurts the impression of reality, and that’s the lack of shadows. It would be wasteful to spend all this time implementing Phong illumination and leave it in this state. There are several shadowing techniques to choose from and some of them exist in several different forms. Unfortunately, they all suck in one way or another. The two most common are stencil shadows and shadow mapping. The advantage of stencil shadows is that the shadows are pixel accurate and that stenciling is widely supported. The disadvantage is that it’s slow, not particularly scalable, hard to implement, not very general and may interfere with some anti-aliasing techniques. The advantage of shadow mapping is that it’s reasonably fast, quite scalable, easy to implement and very general. The disadvantage is that the shadows are prone to aliasing. It has enough pluses though to make it my shadow technique of choice.
The idea behind shadow mapping is simple. In the first pass you render the distance to the light into a texture from the light’s point of view. Then in the second pass you check the distance to the light against what’s stored in the texture from pass 1. If the distance is larger than the stored value, obviously some other object in the same line of view covers the light, which implies that it’s in shadow. Otherwise it’s lit. Quite simple, isn’t it? In our case we use omni-directional lights, so we’ll need to render to a cubemap instead of a normal texture. As we’re only interested in distance and not colors etc., we can use a much simpler pass. No textures, just plain geometry. For that we’ll need a pair of simple shaders.
```
vs.2.0
dcl_position v0
// c0-c3 = mvp matrix
// c5 = light position
// Transform position
m4x4 oPos, v0, c0
sub oT0, c5, v0 // oT0 = light vector
```
It can’t get much simpler: just compute the light vector. No tangent spaces or anything, just a subtraction and we’re done. The pixel shader isn’t any more complex.
```
ps.2.0
dcl t0
dp3 r0, t0, t0
mov oC0, r0
```
The dot product with itself gives the squared length of the light vector. Normally one would compare the distances, but the squared distances work just as well and give a significant speed boost. There is an issue, though, that we need to take care of for this to work well. When comparing with the stored distance there will unavoidably be precision errors due to the finite resolution of our shadow map and limited number of bits. For this reason you need to bias the distance to give some headroom for precision errors. Normally you would just add a small number. However, if we’re using the squared distances this won’t work very well due to the non-linear spacing we have. It would effectively make our bias smaller and smaller with distance and artifacts would soon be visible. If we use a larger bias we would instead get problems with missing shadows close to the light. Unfortunately, there’s no optimal bias in between either; rather, we could find biases which cause both kinds of artifacts. Instead we’ll take a different approach. We’ll just multiply the distance with a constant slightly less than 1. This will instead define the allowed error in terms of a certain percentage, which will work much better. Only very close up on the light will there be artifacts. If this is a case that matters much to you there’s still the option to use the linear distance rather than the squared distance, but at a performance cost of course.

Note that squared distances will return quite large numbers, certainly larger than 1 in general unless we use a very small world, so we’ll need a floating point texture to store them in. We could use a normal fixed point texture too, but then we’d need to scale it down such that we’ll never get anything larger than 1. We can’t allow clamping as that will destroy our shadows. Also, floating point better suits our quadratic representation of distance. So the best choice for us would be to use a D3DFMT_R32F texture. Note that some pixel shader 2.0 hardware doesn’t support floating point cubemaps though, but otherwise this is an ideal format as it is single channel and floating point with high precision. If you need to support such hardware you’ll be better off just using the linear distance instead.
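The difference between the two biasing schemes can be illustrated numerically. This is a toy Python sketch (the error model and numbers are invented for illustration), assuming the recomputed squared distance carries a roughly relative precision error, as is typical with floating point:

```python
# Why a multiplicative bias suits squared distances: with a relative
# precision error, the absolute error in d*d grows with d. A fixed
# additive bias tuned for short range then fails far away, while a
# percentage bias gives the same headroom at every distance.
REL_ERR = 0.01   # ~1% precision error in the recomputed distance

def false_shadow_additive(d, bias):
    recomputed = (d * (1 + REL_ERR)) ** 2
    return recomputed - bias > d * d          # fixed headroom

def false_shadow_multiplicative(d, factor=0.97):
    recomputed = (d * (1 + REL_ERR)) ** 2
    return recomputed * factor > d * d        # percentage headroom

print(false_shadow_additive(1.0, bias=0.1))    # False: fine up close
print(false_shadow_additive(100.0, bias=0.1))  # True: false shadows far away
print(false_shadow_multiplicative(1.0))        # False
print(false_shadow_multiplicative(100.0))      # False
```

A `True` result means the surface incorrectly shadows itself; the 0.97 factor matches the bias constant used in the shaders below.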
To implement shadows we also need to change our lighting shaders. Our dear vertex shader will receive another line.
```
mov oT3, -r0 // oT3 = shadow map coordinates
```
This line isn’t obvious just by looking at it; instead you must take a look at the old vertex shader and see that r0, from earlier computations, already contains the light vector, that is, the light position minus the vertex position. We want to look up in the cubemap in the direction from the light position towards the vertex position, that is, the exact opposite direction of the light vector. So that’s how we come up with –r0. The pixel shader gets more extensive additions. First we need some basic setup and then we sample the shadow map.
```
dcl t3
dcl_cube s3
...
def c1, 0.97, 1.0, 0.0, 0.0 // (biasfactor, averaging factors)
...
texld r8, t3, s3 // r8 = shadow map
```
Then right after we normalize the light vector we’ll squeeze in an instruction to compute the biased distance to the light.
```
mul r8.y, r7.x, c1.x // r8.y = lengthSqr(light vector) * biasfactor
```
We now need to get a shadow factor, that is, 0 if we’re in shadow and 1 otherwise. So we’ll compare and grab a zero or one from our c1 constant depending on the outcome of the comparison.
```
sub r8.x, r8.x, r8.y
cmp r8.x, r8.x, c1.y, c1.z // r8.x = shadow factor
```
Now we only need to multiply this with our diffuse and specular components. The ambient will be left alone though as we want ambient to be visible in shadowed areas too. So the component combining will be changed to this.
```
mad r4, r4, r0, r3 // r4 = base * diffuse + specular * gloss
mul r4, r4, r5 // r4 *= attenuation
mul r4, r4, r8.x // r4 *= shadow factor
mad r4, r0, c2.z, r4 // r4 += base * ambient
mov oC0, r4
```
Tada, we have shadows! We could leave it at this and be fairly satisfied. This doesn’t mean however that there are no improvements left to be done. Surely enough I have another trick for you. While the shadows created with the above code look fairly good, there is a problem. If the shadow map is of low resolution, say 256x256, we will get pixelation of the shadows. The edges of the shadows have obvious stair-stepping. What can we do about it? Well, we could increase the resolution of our shadow map. This will quickly kill our performance though. Rendering to a 512x512 shadow map requires four times the fillrate of rendering to a 256x256 shadow map. Instead we’ll try to anti-alias our shadows. How can we do that? By taking several samples and averaging them. So we’ll just take the normal shadow map sampling position and add an arbitrary constant to offset it slightly and take another sample. We’ll take three additional samples for a total of four to get a decent smoothing of the edges. So we’ll need to provide three additional sampling positions from the vertex shader.
```
def c8, 1.0, 2.0, -1.0, 0.0
def c9, 2.0, -1.0, 1.0, 0.0
def c10, -1.0, 1.0, 2.0, 0.0
...
sub oT4, c8, r0
sub oT5, c9, r0
sub oT6, c10, r0
```
The pixel shader gets its fair share of edits too. The changes are pretty straightforward. First we just sample at the newly provided sample positions.
```
dcl t4
dcl t5
dcl t6
...
texld r9, t4, s3 // r9 = shadow map
texld r10, t5, s3 // r10 = shadow map
texld r11, t6, s3 // r11 = shadow map
...
```
Then we’ll need to revise the shadow factor calculation slightly. We’ll use 0.25 instead of 1.0 for obvious reasons. We’ll accumulate the results from all sample comparisons in r8.x, so the final combining code remains the same.
```
def c1, 0.97, 0.25, 0.0, 0.0 // (biasfactor, averaging factors)
...
sub r8.x, r8.x, r8.y
sub r9.x, r9.x, r8.y
sub r10.x, r10.x, r8.y
sub r11.x, r11.x, r8.y
cmp r8.x, r8.x, c1.y, c1.z
cmp r9.x, r9.x, c1.y, c1.z
cmp r10.x, r10.x, c1.y, c1.z
cmp r11.x, r11.x, c1.y, c1.z
add r8.x, r8.x, r9.x
add r8.x, r8.x, r10.x
add r8.x, r8.x, r11.x
```
And that’s it. We can now view our precious creation with highest pleasure. The shadows should now look much smoother. If we make a close-up though we can still see some stair-stepping; more samples would solve that. I’ll leave that as an exercise for the interested.
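The averaging scheme above can be sketched in software. This toy Python function (names are mine) mirrors the shader logic: compare the biased squared distance against each jittered shadow-map sample and average the pass/fail results into a fractional shadow factor:

```python
# Software sketch of the 4-sample shadow smoothing: each stored
# shadow-map sample is compared against the biased squared distance
# to the light, and the binary results are averaged.
BIAS_FACTOR = 0.97   # same multiplicative bias as in the shader

def shadow_factor(stored_samples, dist_sq):
    biased = dist_sq * BIAS_FACTOR
    # 1.0 where the stored distance is at least our own (lit),
    # 0.0 where something closer occludes the light (shadowed)
    hits = [1.0 if stored >= biased else 0.0 for stored in stored_samples]
    return sum(hits) / len(hits)

# Fully lit, fully shadowed, and a shadow edge:
print(shadow_factor([9.0, 9.1, 8.9, 9.2], 8.0))  # 1.0
print(shadow_factor([4.0, 4.1, 3.9, 4.2], 8.0))  # 0.0
print(shadow_factor([9.0, 9.1, 4.0, 4.1], 8.0))  # 0.5
```

The fractional values at shadow edges are exactly what smooths the stair-stepping; more samples simply give more intermediate levels.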
We’ve come a long way, we have implemented something that at the time of this writing was hardly possible to do in real-time just a year ago. It’s fascinating how far graphics technology has advanced recently, and we’re still moving. As mentioned several times in this article we are still doing lots of things that are hardly real, but as technology goes forward I hope we can overcome these problems too. I hope we can join up some time in the future and implement real soft shadows, real indirect lighting and real displaced geometry instead of normal mapped simulations. See you then.
Work practices and challenges in pull-based development
The integrator's perspective
Gousios, Georgios; Zaidman, Andy; Storey, Margaret Anne; van Deursen, Arie
DOI
10.1109/ICSE.2015.55
Publication date
2015
Document Version
Accepted author manuscript
Work Practices and Challenges in Pull-Based Development: The Integrator’s Perspective
Georgios Gousios*, Andy Zaidman†, Margaret-Anne Storey‡, Arie van Deursen†
* Radboud University Nijmegen, the Netherlands
Email: g.gousios@cs.ru.nl
† Delft University of Technology, the Netherlands
Email: {a.e.zaidman, arie.vandeursen}@tudelft.nl
‡ University of Victoria, BC, Canada
Email: mstorey@uvic.ca
Abstract—In the pull-based development model, the integrator has the crucial role of managing and integrating contributions. This work focuses on the role of the integrator and investigates working habits and challenges alike. We set up an exploratory qualitative study involving a large-scale survey of 749 integrators, to which we add quantitative data from the integrators’ projects. Our results provide insights into the factors they consider in their decision-making process to accept or reject a contribution. Our key findings are that integrators struggle to maintain the quality of their projects and have difficulties with prioritizing contributions that are to be merged. Our insights have implications for practitioners who wish to use or improve their pull-based development process, as well as for researchers striving to understand the theoretical implications of the pull-based model in software development.
I. INTRODUCTION
Pull-based development as a distributed development model is a distinct way of collaborating in software development. In this model, the project’s main repository is not shared among potential contributors; instead, contributors fork (clone) the repository and make their changes independent of each other. When a set of changes is ready to be submitted to the main repository, they create a pull request, which specifies a local branch to be merged with a branch in the main repository. A member of the project’s core team (from here on, the integrator¹) is responsible for inspecting the changes and integrating them into the project’s main development line.
The role of the integrator is crucial. The integrator must act as a guardian for the project’s quality while at the same time keeping several (often, more than ten) contributions “in-flight” through communicating modification requirements to the original contributors. Being a part of a development team, the integrator must facilitate consensus-reaching discussions and timely evaluation of the contributions. In Open Source Software (OSS) projects, the integrator is additionally taxed with enforcing an online discussion etiquette and ensuring the project’s longevity by on-boarding new contributors.
The pull-based development process is quickly becoming a widely used model for distributed software development [1]. On GitHub alone, it is currently being used, either exclusively or in combination with the shared repository model, in almost half of the collaborative projects. With GitHub hosting more than 1 million collaborative projects and competing services, such as BitBucket and Gitorious, offering similar implementations of the pull-based model, we expect the pull-based development model to become the default model for distributed software development in the years to come.
By better understanding the work practices and the challenges that integrators face while working in pull-based settings, we can inform the design of better tools to support their work and come up with best practices to facilitate efficient collaboration. To do so, we set up an exploratory qualitative investigation and survey integrators on how they use the pull-based development model in their projects. Our field of study is GitHub; using our GHTorrent database [2], we aimed our survey at integrators from high profile and high volume projects. An explicit goal is to learn from many projects rather than study a few projects in depth. We therefore use surveys as our main research instrument, generously sprinkled with open-ended questions. We motivate our survey questions based on a rigorous analysis of the existing literature and our own experience with working with and analysing the pull-based model during the last 2 years. We conducted a two-round (pilot and main) survey with 21 and 749 respondents respectively.
Our main findings reveal that integrators successfully use pull requests to solicit external contributions and we provide insights into the decision making process that integrators go through while evaluating contributions. The two key factors that integrators are concerned with in their day-to-day work are quality and prioritization. The quality phenomenon manifests itself by the explicit request of integrators that pull requests undergo code review, their concern for quality at the source code level and the presence of tests. Prioritization is also a concern for integrators as they typically need to manage large amounts of contribution requests simultaneously.
II. BACKGROUND AND RELATED WORK
The goal of distributed software development methods is to allow developers to work on the same software product while being geographically and timezone dispersed [3]. The proliferation of distributed software development techniques
¹ Also referred to as “integration manager”: http://git-scm.com/book/en/Distributed-Git-Distributed-Workflows. We use the term integrator for brevity.
was facilitated by the introduction of online collaboration tools such as source code version control systems and bug databases [4], [5]. The main differentiation across distributed software development methods is the process of integrating an incoming set of changes into a project’s code base. This change integration process has gone through many phases, as the collaboration tools matured and adapted to changing development needs; pull-based development [1] is the latest of those developments.
In distributed software development, the first step towards integrating changes is evaluating the proposed contributions. This is a complex process, involving both technical [6], [7], [8] and social aspects [9], [10], [11].
Mockus et al. [6] analyzed two early OSS communities, Mozilla and Apache, and identified common patterns in evaluating contributions, namely the commit-then-review process. As an alternative, the Apache community also featured a review process through mailing list patch submissions. Rigby and Storey examined the peer review process in OSS mailing lists [7] and found that developers filter emails to reduce evaluation load, prioritize using progressive detail within emails containing patches and delegate by appending names to the patch email recipients. Jiang et al. [8] analyzed patch submission and acceptance in the Linux kernel project, which is using a preliminary pull-based development model, and found that, through time, contributions are becoming more frequent, while code reviews are taking less time.
As the change submission and integration models evolve, so do the evaluation processes. Bacchelli and Bird [12] refer to lightweight, branch-based peer reviews as “modern” code review. This kind of peer review is similar to the reviews taking place in pull-based development in many aspects, with an important difference: the process for accepting a contribution is pre-determined and requires sign-off by a specific number of integrators. They find that while the stated purpose of modern code review is finding defects, in practice, the benefits in knowledge transfer and team awareness outweigh those stemming from defect finding. In a similar quantitative study [13], Rigby and Bird analyzed branch-based code review processes in OSS and commercial systems and found that reviewed patches are generally very small while two reviewers find an optimal number of defects.
In recent work, Gousios et al. [1] and Tsay et al. [14] investigated quantitatively what factors underlie the acceptance of contributions in pull-based development; both find similar effects, but the dominating factors (hotness of project area and social distance, respectively) are vastly different. This difference suggests that there may be no underlying processes for contribution evaluation in pull-based development that are in effect across projects. In turn, this calls for a more in-depth, qualitative study to help us understand how integrators evaluate contributions. Initial qualitative evidence on how integrators assess contributions has been reported by Pham et al. [15], but the focus of this work was the evaluation of testing practices rather than the pull-based development model.
A number of social aspects also affect the evaluation of contributions. Duchneaut found that developers looking to get their contributions accepted must become known to the core team [10]. Then, core team members would use the developer’s previous actions as one of the signals for judging contributions. Similarly, Krogh et al. [9] found that projects have established implicit “joining scripts” to permit new developers to contribute to the project, according to which they examine the developers’ past actions to permit access to the main repository. There is no empirical evidence on whether the developer’s previous actions play a significant role in contribution assessment in the context of pull-based development; in fact, quantitative data from Gousios et al. [1] suggest otherwise. Finally, Marlow et al. [11] found that developers on GitHub use social signals, such as the developer’s coding activity and the developer’s social actions (e.g. following other developers), in order to form an impression of the quality of incoming contributions.
III. RESEARCH QUESTIONS
Our examination of the literature revealed that while several researchers have examined how developers evaluate contributions and collaborate in the context of OSS or, more recently, GitHub, no work has examined yet how integrators perceive pull-based development. With pull-based development rapidly rising in popularity, it is important to expand our understanding of how it works in practice and what challenges developers in general and integrators in particular face when applying it. Consequently, our first question explores how integrators employ the pull-based development model in their projects at the project level:
RQ1: How do integrators use pull-based development in their projects? To make the analysis easier, we further refine RQ1 into the following subquestions:
- RQ1.1 How do integrators conduct code reviews?
- RQ1.2 How do integrators merge contributions?
After a contribution has been received, the integrators must decide whether it is suitable for the project or not. Recent quantitative work identified that, across projects, simple factors such as the recent activity of the project area affected by the contribution [1] and social distance between the contributor and the integrator [14] can be used to predict whether a contribution will be accepted or not. What criteria do the integrators use to make this decision? This motivates our second research question:
RQ2: How do integrators decide whether to accept a contribution?
When evaluating contributions in collaborative environments, a common theme is quality assessment [6], [7], [12]. In the context of pull-based development, the asynchrony of the medium combined with its high velocity may pose additional (e.g. timing) requirements. It is beneficial to know what factors the integrators examine when evaluating the quality of a contribution and what tools they use to automate the inspection, as the results may be used to design tools that
automate or centralize the evaluation process. Therefore, our third research question is as follows:
**RQ3:** How do the integrators evaluate the quality of contributions?
On busy projects, or in projects with busy integrators, contributions can pile up. It is not uncommon for large projects (for example Ruby on Rails) to have more than 100 pull requests open at any time. How do integrators cope with such a situation? How do they select the next contribution to work on when many need their immediate attention? This leads to our fourth research question:
**RQ4:** How do the integrators prioritize the application of contributions?
The challenges of online collaboration have been a very active field of study, also in the field of distributed software development [4]. The pull-based development setting is unique: the asynchrony between the production of the code and its integration in a project’s code base along with the increased transparency afforded by platforms like GitHub, theoretically allow contributors and integrators to co-ordinate more efficiently. But is this so? How do integrators perceive the theoretical advantages of pull-based development in practice? By understanding the challenges that integrators face when applying pull-based development in their projects, we may better understand the limits of the pull-based method and inform the design of tools to help integrators cope with them.
This leads to our final research question:
**RQ5:** What key challenges do integrators face when working with the pull-based development model?
IV. **Study Design**
We conducted a mixed-methods exploratory study, using mostly qualitative but also quantitative data, that consisted of two rounds of data collection. In the first round, we ran a pilot survey among a limited set of selected integrators. After analyzing the results of the first round, we identified emerging themes (specifically, quality and prioritization), which we addressed by including related questions in the second round. The survey results of the second round were further augmented with, and partitioned by, quantitative results for each specific project. In this section, we describe our research method in detail.
A. **Protocol**
Since our aim is to learn from a large number of projects, we used surveys, which scale well.
**Survey Design** The study took place in two rounds, a pilot round that gave us the opportunity to field test our initial questions and the final round through which we gathered the actual responses.
Both surveys were split into three logical sections: demographic information, multiple choice or Likert-scale questions, and open-ended questions. The open-ended questions were intermixed with multiple choice ones; usually, the developer had to answer an open-ended question and then a related one with fixed answers. To further elicit the developers’ opinions, in all questions that had predefined answers but no related open-ended question, we included an optional “Other” response. Finally, we intentionally used even Likert scales to force participants to make a choice. Overall, and excluding demographic questions, the survey included 7 open-ended questions, 7 Likert-scale questions with an optional open-ended response and 6 multiple choice questions with no optional fields. The survey could be completed in about 15 minutes.
The purpose of the survey pilot was to identify themes on which we should focus the main survey. As such, the pilot survey included fewer open-ended questions, but all multiple choice questions had optional open-ended reply fields. This allowed us to test our initial question set for strongly correlated answers (we removed several potential answers from multiple choice questions) and to identify two topics, namely quality and prioritization, which we addressed in the main survey round.
**Attracting participants** In previous work [1], we presented evidence that most repositories on GitHub are inactive, single-user projects. To ensure that our sample consisted of repositories that make effective and large-scale use of pull requests, we selected all repositories in our GHTorrent dataset [2] that had received at least one pull request for each week in the year 2013 (3,400 repositories). For the selected repositories, we extracted the top pull request integrators, as identified by the number of pull requests that they had merged, and built our correspondence list.
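The selection criterion can be sketched as follows. This is a hypothetical Python illustration (the function, the event format, and the repository names are invented; the actual selection was done against the GHTorrent relational database):

```python
# Toy sketch: keep repositories with at least one pull request in
# every ISO week of a given year, from (repo, date) event pairs.
from datetime import date, timedelta

def weekly_active_repos(pr_events, year=2013):
    weeks_seen = {}
    for repo, day in pr_events:
        if day.year == year:
            weeks_seen.setdefault(repo, set()).add(day.isocalendar()[1])
    # Dec 28 always falls in the last ISO week of its year (52 or 53)
    total_weeks = date(year, 12, 28).isocalendar()[1]
    return {r for r, weeks in weeks_seen.items() if len(weeks) >= total_weeks}

# One PR per week for a full year vs. only ten weeks of activity:
busy = [("active/repo", date(2013, 1, 1) + timedelta(weeks=w)) for w in range(52)]
quiet = [("quiet/repo", date(2013, 1, 1) + timedelta(weeks=w)) for w in range(10)]
print(weekly_active_repos(busy + quiet))   # {'active/repo'}
```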
For the pilot phase, we emailed 250 of those integrators randomly and received 21 answers (8% answer rate). For the data collection phase, we emailed integrators from the remaining 3,150 projects and received 749 answers (23% answer rate). The survey’s web address was sent by personal email to all participants. We did not restrict access to the survey to invited users only. In fact, several survey respondents forwarded the survey to colleagues or advertised it on social media (Twitter) without our consent. After comparing the response set with the original set of projects we contacted, we found that 35% of the responses came through third party advertising of the survey. The survey ran from April 14 to May 1, 2014.
To encourage participation, we created a customized project report for each project in our correspondence list. The report included plots on the project’s performance in handling pull requests (e.g. mean close time) on a monthly basis. The reports for all projects have been published online[2] and since then have been widely circulated among developers. Of the 749 survey respondents, 138 also expressed gratitude for their report through email.
B. **Participants**
The majority of our respondents self-identified as project owners (71%), while 57% work for industry. Most of them also have more than 7 years of software development experience (81%) and considerable experience (> 3 years) in geographically distributed software development (76%).
To identify the leading groups of respondents based on the combined effect of experience, role in the project and work place, we ran the kmodes clustering algorithm (a variation of kmeans for categorical data) on the dataset. The clustering results revealed that 1/4 of the respondents (275/749) are project owners with more than 7 years of industrial experience; of those, around 40% (108/275) also worked exclusively on the projects they responded about.
C. Analysis
We applied manual coding on the seven open-ended questions as follows: initially, three of the four authors individually coded a different set of 50 (out of 750) answers for each question. At least one and up to three codes were applied to each answer. The order of code application reflected the emphasis each answer gave on the code topic. The extracted codes were then grouped together and processed to remove duplicates and, in some cases, to generalize or specialize them. The new codes were then applied to all answers by the first author. When new codes emerged, they were integrated into the code set. On average, 30% more codes were discovered because we decided to code the full dataset.
In the survey, we asked integrators to optionally report a single repository name for which they handle most pull requests. 88% of the respondents did so. For the remaining 83 answers, we either resolved the repository names from the developer’s emails (since integrators were invited to participate based on a specific email), or selected the most active project the developer managed pull requests for, while we also fixed typos in repository names. We excluded from further analysis answers for which we could not obtain a repository name (61 answers). After we resolved the repository names, we augmented the survey dataset with information from the GHTorrent database [2]. Specifically, for each project, we calculated the mean number of pull requests per month and the mean number of integrators for the period July 2013 to July 2014. Using those metrics, and for each one of them, we split the project population in three equally sized groups (small, medium and large). Finally, we excluded answers from projects that received no pull request in this time frame (14 answers). None of these were in our original contact list.
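The three-way split described above amounts to a tercile partition on each metric. A minimal sketch in Python (the project names and metric values are invented for illustration):

```python
def tercile_split(projects):
    """Split (name, metric) pairs into small/medium/large thirds by metric.

    The metric could be, e.g., mean pull requests per month. Ties are
    broken by sort order, so the three groups are as equal in size as
    the data allows.
    """
    ranked = sorted(projects, key=lambda p: p[1])
    n = len(ranked)
    cut1, cut2 = n // 3, 2 * n // 3
    return {
        "small":  ranked[:cut1],
        "medium": ranked[cut1:cut2],
        "large":  ranked[cut2:],
    }

# Hypothetical projects with their mean pull requests per month.
groups = tercile_split([("a", 2), ("b", 40), ("c", 7), ("d", 120),
                        ("e", 15), ("f", 3)])
```

A quantile-based cut (e.g. pandas `qcut`) would give the same grouping for data without heavy ties; the explicit version above makes the equal-sized-groups intent visible.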
V. Results
In this section, we present our findings per research question. To enable traceability, we include direct quotes from integrators along with the answer identified in our dataset (e.g. R1 corresponds to answer 1). Similarly, in the case of coded open-ended questions, we present the discovered codes slanted.
A. RQ1: How do integrators use pull-based development in their projects?
1) Overall use: To understand why and how projects use the pull-based development model, we asked integrators a multiple choice question that included the union of potential uses of pull requests that have been reported in the literature [1], [16], [15]. Respondents also had the opportunity to report other uses not in our list.
Overwhelmingly, 80% of the integrators use the pull-based development model for doing code reviews and 80% to resolve issues. Perhaps more interesting is that half of the integrators use pull requests to discuss new features (as R710 commented: “experimenting with changes to get a feel if you are on the right path”). This is a variation of the GitHub-promoted way of working with pull requests, where a pull request is opened as early as possible to invite discussion on the developed feature.
60% of the integrators use pull requests to solicit contributions from the community (people with no direct commit access to the repository), which seems low given the open nature of the GitHub platform. We examined this response quantitatively, using the GHTorrent database: indeed, for 39% of the projects that responded, no pull request originated from the project community. There is a small overlap (30%) between projects responding that they do not use pull requests to solicit contributions from the community and those that actually did not receive a pull request. Moreover, another 28% of the projects reported that they have used pull requests to solicit contributions from the community even though they did not receive any external pull requests.
Only 4% (or 29) of the respondents indicated that they use pull requests for something else. The analysis of the answers reveals that the majority of the replies nevertheless aligns with the offered choice answers with two notable exceptions. Respondent R635 mentions that they use pull requests in “every commit we make. We have a policy of having every commit, even bumping up version number for next release, coming in on a PR.”. The project has effectively turned pull requests into a meta-version control system, one that only allows reviewed code to be merged. This merging behaviour is also in place within Microsoft [12] and in the Android project [13]. Another integrator is using pull requests as a time machine mechanism: R521: “Ideally, any change, because using PRs makes it easier to rollback a change if needed”.
2) Code reviews: In the time between a pull request submission and before it is accepted, it becomes a subject of inspection. 75% of the projects indicate that they do explicit code reviews on all contributions (only 7% of the projects do not review their pull requests using GitHub, but those have specified alternative ways of doing code reviews as described below). On GitHub, anyone can participate in the inspection process. 50% of the integrators report that the project’s community actively participates in code reviews; this is in contrast with Gousios et al. [1], where we found that in all projects we examined, the community discussing pull requests was bigger than the core team.
In current code reviewing practices, using tools such as Gerrit [13] or Codeflow [12], code review comments are intermingled with code and a predetermined approval process is in place. GitHub offers a more liberal code reviewing system where users can provide comments on the pull request as a whole, on the pull request code, or even on individual commits comprising the pull request, but it imposes no approval process. 75% of the integrators use inline code comments in the pull request to do code reviews; only 8% of the integrators report that they use commit comments. The absence of strict acceptance process support has created a market for code reviewing tools: of the 7% (or 52) of the integrators that indicated they are doing code reviews in another way, 20% (or 10) mentioned that they are explicitly using a different tool for doing code reviews.
Projects have established processes for doing code reviews. One of them is delegation; 42% of the integrators delegate a code review if they are not familiar with the code under review. Delegation is again not a strictly defined process on GitHub; by convention, it can occur by referencing (@username) a user name in the pull request body, but integrators report other ways to delegate work: for example, R62 uses video conferencing to discuss pull requests and assign work load, while others (e.g. R577, R587) use external tools with support for delegation. Another process is implicit sign-off: at least 20 integrators reported that multiple developers are required to review a pull request to ensure high quality. Typically this is 2 reviewers, e.g. R481: “We have a rule that at least 2 of the core developers must review the code on all pull requests.”. Rigby and Bird also report a similar finding in Gerrit-based industrial projects [13].
3) Integrating Changes: When the inspection process finishes and the contributions are deemed satisfactory, they can be merged. A pull request can only be merged by core team members. The versatility of Git enables pull requests to be merged in various ways, with different levels of preservation of the original source code properties. Briefly, a pull request can be integrated either through GitHub’s facilities or a combination of low level git commands, such as merge or cherry-pick.
We gave integrators a list of 4 ways to perform merges, as identified in [17], and asked them how often they use them, but also allowed them to describe their own. In 79% of the cases, integrators use the GitHub web interface “often or always” to do a merge; this number is actually close to what we obtained by quantitatively analyzing pull requests in [17] and [1]. Only in 8% and 1% of the cases do integrators resort to cherry-picking or textual patches respectively to do the merge.
As identified by the integrators in the comments, the command-line git tool is mostly used in advanced merging scenarios where conflicts might occur. 4% (or 28) of the respondents mentioned that they are using rebasing (history rewriting) in the following ways: i) placing the new commits in the source branch on top of the current ones in the target branch (e.g. R306 and R316), which effectively merges the two branches while avoiding redundant merge commits, and ii) asking the contributor to squash pull request commits into one before submitting the pull request. Moreover, integrators indicated that they allow their continuous integration system to do the merge (e.g. R157) or use scripts to automate merges between feature branches (e.g. R321).
Fig. 1: Signals used by integrators when deciding on whether a contribution will be accepted or not.
Overall, integrators emphasize the preservation of commit metadata by avoiding textual patches and cherry-picking, while some of them use history rewriting to avoid the formation of complicated networks of branches and merges.
RQ1: Integrators successfully use the pull-based model to accommodate code reviews, discuss new features and solicit external contributions. 75% of the integrators conduct explicit code reviews on all contributions. Integrators prefer merges that preserve commit metadata.
B. RQ2: How do integrators decide whether to accept a contribution?
The second research question elicits the signals that integrators use to decide on the fate of a contribution. We asked integrators an optional open-ended question and received 324 answers. The results are summarized in Figure 1.
The most important factor leading to acceptance of a contribution is its quality. Quality has many manifestations in our response set; integrators examine the source code quality and code style of incoming code, along with its documentation and granularity: “Code style and whether or not it matches project style. Overall programming practice, lack of hacks and workarounds.” (R32). At a higher level, they also examine the quality of the commit set and whether it adheres to the project conventions for submitting pull requests.
A second signal that the integrators examine is project fit. As respondent R229 states: “The most important factor is if the proposed pull request is in line with the goals and target of the project”. A variation is technical fit: does the code fit the technical design of the project (R90: “Most important to us is that the contribution is in keeping with the spirit of the project’s other APIs, and that its newly introduced code follow the total and functional style of the rest of the codebase”). Integrators also examine the importance of the fix/feature with
respect to the current priorities of the project. This is common in case of bug fixes: “If it fixes a serious bug with minimal changes, it’s more likely to be accepted.” (R131).
A third theme that emerged from the integrator responses is testing. Apart from assessing the quality of contributions using higher level signals, integrators also need to assess whether the contributed code actually works. Initially, integrators treat the existence of testing code in the pull request as a positive signal. Success of test runs by a continuous integration system also reinforces trust in the code: “All tests must pass integration testing on all supported platforms...” (R94). Finally, integrators resort to manual testing if automated testing does not allow them to build enough confidence: “If other developers verified the changes in their own clones and all went fine, then we accept.” (R156).
It is interesting to note that the track record of the contributors is ranked low in the integrator check list. This is in line with our earlier analysis of pull requests, in which we did not see a difference in treatment of pull requests from the core team or from the project’s community [1].
Finally, technical factors such as whether the contribution is in a mergeable state, its impact on the source code or its correctness are not very important for the eventual decision to merge to the majority of respondents. In such cases, integrators can simply postpone decisions until fixes are being provided by the contributors: “…occasionally I go through discussion with committer on how to do things better or keep the code-style held in the whole project” (R300). The postponing effect has also been observed by Rigby and Storey [7].
**RQ2:** Integrators decide to accept a contribution based on its quality and its degree of fit to the project’s roadmap and technical design.
C. **RQ3:** What factors do the integrators use to examine the quality of contributions?
When examining contributions, quality is among the top priorities for developers. With this research question, we explore how integrators perceive quality and what tools they use to assess it, by means of a pair of compulsory open-ended and multiple choice questions. The results are summarized in Figure 2.
1) **Perception:** One of the top priorities for integrators when evaluating pull request quality is conformance. Conformance can have multiple readings: For R39, conformance means “it matches the project’s current style (or at least improve upon it)” (project style) while for R155 conformance is to be evaluated against fitting with internal API usage rules (architecture fit). Many integrators also examine conformance against the programming language’s style idioms (e.g. PEP8 for Python code). Integrators expect the contributed code to cause minimal friction with their existing code base, and they try to ensure this by enforcing rules on what they accept.
Integrators often relate contribution quality to the quality of the source code it contains. To evaluate source code quality, they mostly examine non-functional characteristics of the changes. Source code that is understandable and elegant, has good documentation and provides clear added value to the project with minimal impact is preferred.
Apart from source code, the integrators use characteristics of the pull request as proxies to evaluate the quality of the submission. The quality (or even the existence) of the pull request documentation signifies an increased attention to detail by the submitter: “A submitter who includes a clear description of what their pull request does have usually put more time and thought into their submission” (R605). The integrators also examine the commit organization in the pull request: “well written commit messages; one commit about a single subsystem — each commit compiles separately” (R610) and its size. In the latter case, the integrators value small pull requests as it is easier to assess their impact (R246: “…the code has the minimum number of lines needed to do what it’s supposed to do” or R330: “is the diff minimal?”).
Testing plays an important role in evaluating submissions. Initially, the very existence of tests in the pull request is perceived as a positive signal. The integrators also examine whether the changes in the pull request are covered by existing or new tests (test coverage), while, in 4% of the cases, they report that they exercise the changes manually (manual testing). Moreover, in performance-critical code, performance degradation is frowned upon and, in some cases, integrators require proof that performance is not affected by the proposed change, e.g. in R72: “Performance related changes require test data or a test case”.
Finally, integrators use social signals to build trust in the examined contribution. The most important one is the contributor’s reputation. The integrators build a mental profile of the contributor by evaluating their track record within the project (R405: “Who submitted the PR and what history did we have with him/her?”) or by searching for information about the contributor’s work in other projects (R445: “looking at the other contributions in other projects of the pull author”). Some integrators also use interpersonal relationships to make judgements about the contributor and, by proxy, about their work. The process of impression building through social signals has been further elaborated by Marlow et al. [11].

**Fig. 2:** Factors that integrators examine when evaluating the quality of contributions.
2) Tools: Quality evaluations can be supported by tools. To evaluate how often projects use tools, we gave integrators a selection of tools and asked them which ones they use in their projects. The vast majority (75%) of projects use continuous integration, either in hosted services or in standalone setups. Continuous integration services, such as Travis and CloudBees, allow projects to run their test suites against incoming pull requests, while integration with GitHub enables them to update pull requests with test outcomes. On the other hand, few projects use more dedicated software quality tools such as metric calculators (15%) or coverage reports (18%). It is interesting to note that practically all (98%) projects that use more advanced quality tools, run them through continuous integration.
99 integrators responded that they are using other tools. By going through the responses, we see that integrators use a rather limited toolset. Specifically, only a handful of integrators reported that they are using linting tools, while dedicated static analysis tools are used in just two large-scale C++ projects in our sample. In two more cases, the integrators reported that they rely on the language’s type system to eliminate bugs. Finally, the majority of integrators answered that they evaluate the quality manually (e.g. R291: “my brain is a powerful testing environment” or R353: “good eyes and many eyes”) even when they were asked what tools they are using to do so.
RQ3: Top priorities for integrators when evaluating contribution quality include conformance to project style and architecture, source code quality and test coverage. Integrators use few quality evaluation tools other than continuous integration.
D. RQ4: How do the integrators prioritize the application of contributions?
Our fourth research question examines the factors integrators use to prioritize their work on evaluating contributions. To discover them, we asked integrators a compulsory open-ended question. The results are summarized in Figure 3.
The first thing that integrators examine is the contribution’s urgency. In case of bug-fixing contributions, the criticality of the fix is the most important feature to prioritize by. Integrators examine at least the following factors to assess criticality: i) the contribution fixes a security issue, ii) the contribution fixes a serious new bug, iii) the contribution fixes a bug that other projects depend upon, and iv) number of issues blocked by the unsolved bug.
In the case of a contribution implementing new features, integrators examine whether the contribution implements customer requested features or features required for the development of other features. Several integrators also mentioned that they just examine the type of the contribution before its criticality; it is usually project policy to handle bug fixing contributions before enhancements, as is the case with R446: “Bug fixes first, then new features. Only if all bug fix pull requests are treated.”
The pull request age plays an important role in prioritization for integrators. It is interesting to note that many integrators prefer a first-in, first-out treatment of the pull requests before applying other prioritization criteria. Similarly, easy-to-assess (and therefore less complex) pull requests are preferred by integrators. The size of the patch, even though usually related to complexity, is used to quickly filter out small, easy-to-integrate contributions and process them first (e.g. R490: “The lower the number of lines/files changes, the more likely I am to process it first.”).
The contributor’s track record is a relatively important factor for prioritization and usually known contributors get higher priority. As R82 states it: “If I know the person, they get high priority. Sorry, strangers.” A related criterion is the contributor’s origin; if the contributor is another core team member or, in business settings, a colleague, some projects assign priorities to his/her contributions (e.g. R106, R183, R411), while some others specifically favour community contributions (e.g. R161, R398).
Finally, it is interesting to note that 18% of all integrators in our sample are not using any prioritization processes at all.
When prioritizing contributions, integrators must apply multiple criteria in a specific sequence. Figure 3 depicts the frequencies of prioritization criteria usage for all reported application sequences. What we can see is that criticality,
urgency and change size contribute to most prioritization criteria application sequences, while most integrators report that they apply at most two prioritization criteria.
**RQ4**: Integrators prioritize contributions by examining their criticality (in case of bug fixes), their urgency (in case of new features) and their size. Bug fixes are commonly given higher priority. One fifth of the integrators do not prioritize.
E. **RQ5**: What key challenges do integrators face when working with the pull-based development model?
We asked integrators an optional open-ended question and received 410 answers. We found two broad categories of challenges: technical challenges hamper the integrator’s ability to work effectively, while social challenges make it difficult for integrators to work efficiently with other project members.
1) **Technical challenges**: At the project level, maintaining quality is what most integrators perceive as a serious challenge. As incoming code contributions mostly originate from non-trusted sources, adequate reviewing may be required by integrators familiar with the project area affected by it. Reviewer availability is not guaranteed, especially in projects with no funded developers. Often, integrators have to deal with solutions tuned to a particular contributor requirement or an edge case; asking the contributor to generalize them to fit the project goals is not straightforward. A related issue is feature isolation; contributors submit pull requests that contain multiple features and affect multiple areas of the project. As put by R509: “Huge, unwieldy, complicated bundles of ‘hey I added a lot of features and fixes ALL AT ONCE!’ that are hell to review and that I’d like to *partially* reject if only the parts were in any way separable…”.
Several issues are aggravated the bigger or more popular a project is. Integrators of popular projects mentioned that the volume of incoming contributions is just too big (e.g. Ruby on Rails receives on average 7 new pull requests per day); consequently, they see triaging and work prioritization as challenges. As requests are kept on the project queue, they age: the project moves ahead in terms of functionality or architecture and then it is difficult to merge them without (real or logical) conflicts. Moreover, it is not straightforward to assess the impact of stale pull requests on the current state of the project or on each other.
Another category of technical challenges is related to the experience of the contributor. Integrators note that aspiring contributors often ignore the project processes for submitting pull requests leading to unnecessary communication rounds. When less experienced developers or regular users attempt to submit a pull request, they often lack basic git skills (e.g. R42: “Lack of knowledge of git from contributors; most don’t know how to resolve a merge conflict.”). New contributors can be a valuable resource for a project; integrators report that they avoid confrontation in an effort to onboard new users.
Many of the challenges reported by the integrators are bound to the distributed nature of pull-based development. Lack of responsiveness on behalf of the contributor hurts the code review process and, by extension, project flow. This is especially pronounced in the case of hit and run pull requests, as they place additional reviewing and implementation burden on the integrator team. Integrators mention that the lack of centralized co-ordination with respect to project goals can lead to “chaos. Lots of people trying to reach the same goal without coordinating” (R155).
Finally, integrators also report inefficiencies in the GitHub platform itself. Specifically, many integrators complained about the quality of the code review tool offered by GitHub (R567: “A good code review tool with code analysis possibilities can help”) and made comparisons to their favourite ones (e.g. R288: “The mechanism itself is a huge step backwards from Reviewboard”) while others did not like the way GitHub handles notifications (e.g. R514: “Sifting through the GitHub information flood to find what, if any, I should address.”).
2) **Social challenges**: Integrators often have to make decisions that affect the social dynamics of the project. Integrators reported that explaining the reasons for rejection is one of the most challenging parts of their job as hurting the contributor’s feelings is something they seek to avoid. As R255 explains: “Telling people that something is wrong without hurting their feelings or giving them an incorrect idea of my intentions.”. Similarly, integrators find that asking for more work from the contributors (e.g. as a result of a code review) can be difficult at times, as they “…worry about alienating our valued contributors” (R635). Motivating contributors to keep working on the project, even in the face of rejected contributions, is not easy for integrators either.
---
5 R708 describes hit and run pull requests nicely: “They (contributors) send a pull request with a bug but when I ask them to fix them then they just vanish and don’t respond to GitHub e-mails.”
Reaching consensus through the pull request comment mechanism can be challenging. Integrators often find themselves involved in a balancing act of trying to maintain their own vision of the project’s future and incorporating (or rejecting) contributions that are tuned to the contributor’s needs. Differences in opinion, combined with the relative anonymity of the pull request comment mechanism, can lead to unpleasant situations. Integrators may need to take action to maintain discussion etiquette (e.g. R449 “Dealing with loud and trigger-happy developers.”), enforce politeness rules or to stop long, unhelpful (bikeshedding) discussions (R586: “be objective and avoid off-topics in discussions”). Multiple communication channels are not helping either; integrators find it difficult to synchronize between multiple sources.
On a more personal level, integrators find it difficult to handle the workload imposed by the open submission process afforded by the pull-based development model. For many of our respondents, managing contributions is not their main job; consequently finding free time to devote on handling a pull request and context switching between various tasks puts a burden on integrators. As R470 notes: “Managing pull requests is not my full-time job, but it is a component of it. Mostly it is difficult to keep track of them while also completing my other tasks.”
RQ5: Integrators are struggling to maintain quality and mention feature isolation and total volume as key technical challenges. Social challenges include motivating contributors to keep working on the project, reaching consensus through the pull request mechanism and explaining reasons for rejection without discouraging contributors.
VI. DISCUSSION
In this section, we compare and contrast our findings with existing work and present future work directions.
A. Quality
Throughout our analysis, the issue of quality evaluation was recurring. The respondents directly linked quality with acceptance, while also describing maintaining quality as a big challenge. According to integrators, quality emerges from attention to detail; code style, documentation, commit formatting and adherence to project conventions all help to build confidence in the contribution. The issue of quality evaluation has been repeatedly mentioned in works on patch submission [7], [18], lightweight code review [12], [13] and testing [15]; in this sense, our work reinforces earlier findings. In addition, we document in detail what factors integrators examine in contributions when doing quality assessments.
An open question is how to efficiently automate the quality evaluation for pull requests. While tools that automate the evaluation of many tasks that developers perform to determine quality (e.g. code style analyzers, test coverage, software quality metrics, impact analysis, etc.) do exist, we have seen that developers go little beyond testing and continuous integration. To solve this issue, one could envisage a pluggable platform that, given a pull request update, runs a suite of tools and automatically updates the pull request with a configurable quality score. For the platform to be useful, it will have to automatically learn from and adapt to project-specific behaviours.
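The scoring part of such a platform could, for instance, combine normalized per-tool outcomes into a single, project-configurable weighted score. The sketch below is one way this might look; the tool names, weights and values are all hypothetical:

```python
def quality_score(results, weights):
    """Combine normalized tool results (0.0-1.0) into a weighted score.

    `results` maps tool name -> normalized outcome for one pull request
    update; `weights` maps the same names -> project-configured
    importance. Tools that did not run are skipped, so the score
    degrades gracefully instead of failing.
    """
    ran = [t for t in weights if t in results]
    if not ran:
        return None
    total = sum(weights[t] for t in ran)
    return sum(weights[t] * results[t] for t in ran) / total

# Hypothetical per-tool outcomes for one pull request update;
# the "metrics" tool is configured but did not run this time.
results = {"tests": 1.0, "coverage": 0.8, "style": 0.5}
weights = {"tests": 3, "coverage": 2, "style": 1, "metrics": 1}
score = quality_score(results, weights)
```

The learning-and-adaptation part the text calls for would then adjust `weights` per project, e.g. from the project's historical merge decisions; that is deliberately left out of this sketch.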
B. Testing
Integrators overwhelmingly use testing as a safety net when examining contributions. The inclusion of tests in a contribution is perceived as a positive signal, while (reverse) coverage is evaluated by many integrators. 75% of our respondents run tests automatically through continuous integration services. Pham et al. examined how testing works on GitHub [15]; our work confirms many of their findings (e.g. use of testing as a quality signal, manual examination when continuous integration fails) and complements it with more quantitative data about test diffusion on GitHub projects. Moreover, it is interesting to pinpoint the contradiction with the results of our previous work [1], where we found that inclusion of test code in a contribution was not a strong factor influencing either the decision to accept or the time to decide (Tsay et al. [14] report a similar result). We speculate that this difference is due to how we modeled test inclusion (continuous rather than a dichotomous feature) in our previous study.
C. Work Prioritization
In large projects, integrators cannot keep up with the volume of incoming contributions. A potential solution could be a recommendation system that provides hints on which contributions need the integrator’s immediate attention. Existing work on assisted bug triaging (e.g. [19] or [20]) is not directly applicable to the pull-based model, as a pull request is not necessarily as static as a bug report. Researchers might need to come up with different methods of work prioritization that take into account the liveness and asynchrony of the pull-request model. Our analysis of how developers prioritize contributions is a first step in this direction.
D. Developer Track Records
One finding of this work is that a developer’s track record, while present in our response set, is not a commonly used criterion to assess or prioritize contributions by. With the rise of transparent work environments [16], and based on previous work on the subject [9], [11], one would expect that the developer’s track record would be used by the majority of integrators to make inferences about the quality of incoming contributions. Despite this, the track record is mostly used as an auxiliary signal; in both Figure 2 and Figure 3, we can see that developers equally mentioned the track record as the top and second criterion for quality evaluation and prioritization.
E. Community Building
Community building through collaboration has been studied extensively in the context of OSS projects [9], [21], [22]. A common theme in those studies is that recruitment of new developers can be challenging [21], as core teams are reluctant to give access to the main repository without an initiation process [9]. Integrators in our study actually mentioned the
opposite: what is not easy is maintaining the community’s momentum and motivating contributors to do more work. Through transparency [16] and lowered barriers to participation [15], [11], the pull-based model can act as glue for communities built around projects, if integrators are keen enough on fostering their project’s communities by helping newcomers cope with tools and project processes, prioritizing the examination of community contributions and, in the extreme case, not rejecting unwanted contributions.
F. A Modern Theory of Software Change
In recent years, collaborative, lightweight code review has increasingly become the default mechanism for integrating changes, in both collocated [12] and distributed [13], [1] development. Effectively, the pull request (in various forms) is becoming the atomic unit of software change. Existing works (e.g. [23], [24]) anticipated neither lightweight code review nor asynchronous integration of changes. This work can contribute to theory building by providing empirical evidence about the common practices of pull-based development.
VII. LIMITATIONS
We carefully designed the survey to gain insight into the work practices and challenges faced by integrators in pull-based development. We thoughtfully crafted the wording of each of the questions (to avoid ambiguous or leading questions), refining them through small pilot tests and consultations with researchers experienced in survey research, and refined the questions further through a larger pilot study. The response categories we supplied for many of the questions were based on the existing literature, and were likewise refined through the pilot studies. For the questions that had multiple response options, we supplied an additional “other” field, which uncovered responses we had not considered and later coded. Despite our best efforts, this work may be subject to the following limitations:
Generalizability: Since we did purposive sampling from the population of integrators, the findings may not apply to other populations of integrators (e.g. developers using other tools, integrators that work on private projects on GitHub, or integrators that are not among the top three integrators for a given project). Moreover, in previous work [1], we found that the median number of pull requests across repositories is 2; in our sample, the smallest project had more than 400. We expect that if the study is repeated using random sampling of projects, the results will be slightly different, as the average project does not use pull requests in a high capacity. Furthermore, the integrators that responded to our survey may have introduced an additional bias to the results (non-responders may have had different insights or opinions).
Researcher bias: It is possible that researcher bias may have influenced the wording of questions (perhaps to be leading) as well as the coding of the open ended questions. As discussed above, we tested the questions through pilots and had experts evaluate them for this concern. In terms of the analysis of the open ended questions, we conducted a pilot study, and three of us separately coded a sample of the responses to derive these codes.
Research reactivity: The ordering of questions (one may provide context for the next one), the open ended questions, as well as a respondent’s possible tendency to appear in a positive light (for example, their wish to appear fair or logical), may have influenced the accuracy of the answers provided.
VIII. CONCLUSIONS
Our work studies the pull-based development model from the integrator’s perspective. Our goal is to better understand the work practices of integrators working with the pull-based development model and to identify the challenges they face when integrating contributions. The key contributions of this paper are as follows:
- A novel way of using the GHTorrent dataset to generate targeted reports and large scale surveys, and to augment qualitative datasets with quantitative data.
- A publicly available data set with 749 anonymized survey answers.
- A thorough analysis of survey data resulting in answers to our research questions on topics such as work practices in pull-based development, quality evaluation of contributions, work prioritization and open challenges when working with pull requests.
Our anonymized response set, our coded open-ended questions and custom-built R-based analysis and plotting tools are available in the GitHub repository gousiosg/pullreqs-integrators. This data set complements existing quantitative data sets (e.g. our own GHTorrent data set) and provides much needed context for analyzing and interpreting that data. Furthermore, our survey complements the insightful but smaller-scale interviews that have been conducted by other researchers on the pull-based model (e.g. [16], [11], [15], [14]). We welcome replications of this work; potential directions include replications with integrators that (1) use different (non-GitHub) repositories, e.g., Bitbucket, (2) work on private repositories, and (3) work on non-pull request intensive projects. These replications will help in moving towards a theory of how pull-based development impacts distributed software development.
Last but not least, our findings point to several research directions (see Section VI) and have implications for both practice and research. Based on our results, integrators can structure their contribution evaluation processes in an optimized way and be informed about common pitfalls in community management. Researchers can reuse our research methods and datasets to conduct large scale, mixed-methods research, while they can use our research findings as a basis to drive their work on pull request quality evaluation and work prioritization tools.
Acknowledgements The authors would like to thank the survey participants for their time. This work has been partially funded by the NWO 639.022.314 — TestRoots project.
REFERENCES
26 Guide to Effective Auto-Generated Spatial Queries

Eric Johnson
26.1 Introduction
Intelligent position selection for agents—that is, analyzing the environment to find the best location for a given behavior—has evolved rapidly as spatial query systems such as CryENGINE’s Tactical Point System and Unreal Engine 4’s Environment Query System have matured. Once limited to evaluating static, preplaced markers for behaviors such as finding cover or sniping posts, dynamic generation gives us the ability to represent a much wider and more sophisticated range of concepts. The ability to generate points at runtime allows us to sample the environment at arbitrary granularity, adapting to changes in dynamic or destructible environments. In addition, when used to generate a short-term direction rather than a final destination, we can represent complex movement behaviors such as roundabout approaches, evenly encircling a target with teammates, or even artificial life algorithms such as Craig Reynolds’s boids (Reynolds 1987), all while navigating arbitrary terrain.
Originally developed as a generalized, data-driven solution for selecting pregenerated points in the environment, Crysis 2’s Tactical Point System (TPS) is now freely available to the public as part of CryENGINE, while Bulletstorm’s Environmental Tactical Querying
system is now integrated into Unreal Engine 4 as the Environment Query System (EQS), making these techniques accessible to a massive audience (Jack 2013, Zielinsky 2013). As game environments grow increasingly complex, other studios are also adopting this approach with implementations like the Point Query System in FINAL FANTASY XV and the SQL-based SpatialDB in MASA LIFE (Shirakami et al. 2015, Mars 2014).
Designing effective queries is the key to maximizing the quality of agent position selection while dramatically reducing the amount of work required to implement and tune these behaviors. Done well, you can consolidate the majority of a game’s position selection logic into a library of queries run on a spatial query system, rather than managing a collection of disparate and independent algorithms. However, the functionality of these systems has become increasingly sophisticated as they gain wider adoption, presenting developers with more possibilities than ever before. This introduces new challenges in using the array of tools and techniques at our disposal effectively.
In this chapter, we present a selection of tricks and techniques that you can integrate into your agent’s queries to ultimately deliver higher quality, more believable behavior. Each component of a spatial query is covered, from sample generation to failure resistance, to improve the effectiveness of spatial queries in your project.
26.2 Overview
In modern implementations, a single spatial query generally consists of the following components:
- **Sample points**: Locations in the world which we want to evaluate in order to determine their suitability for a particular movement task.
- **Generator**: Creates the initial set of sample points in the environment. For example, one type of generator might create a 100 m 2D grid of points along the floor of the level, whereas another might create a ring of points at a radius of 10 m.
- **Generator origin**: The location around which we want to run the generator—for example, the center of the grid or ring of points that are created. Most often, the generator origin is either the agent itself or some target that it is interacting with.
- **Test**: Measures the value of a sample point, or defines an acceptance condition for it. For example, the sample’s distance from the agent can be a measure of value, while its visibility to the agent’s target can serve as an acceptance condition.
- **Test subject**: A location, object, or list of locations/objects that serve as the subject of comparison for a test. For example, a distance test might compare each sample point’s location against the querying agent, its destination, the set of nearby enemies, recently discovered traps, etc.
To get an idea how these components work together, consider a scenario in which we need to implement a typical approach-and-surround behavior for a group of melee enemies (Figure 26.1). Our goal is to get them into attack range quickly while at the same time fanning out in a circle around the player. To accomplish this, we might begin by using a ring generator, using the player as the generator origin to create a set of sample points in range of our target. Next, by using a series of tests measuring the distance from each sample point to the player, the agent, and the agent’s teammates (as test subjects), we can combine their results to favor positions that close the distance to the player quickly while spreading the group evenly around it.
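The interplay of these components can be sketched in a few lines of engine-agnostic Python. This is a hypothetical illustration, not the TPS or EQS API: a ring generator produces samples around the player, each distance test scores them, and the query runner normalizes the scores per test and combines them by weight.

```python
import math

def ring_generator(origin, radius, count):
    """Create `count` sample points evenly spaced on a ring around `origin`."""
    return [(origin[0] + radius * math.cos(2 * math.pi * i / count),
             origin[1] + radius * math.sin(2 * math.pi * i / count))
            for i in range(count)]

def distance_test(subject):
    """Score a sample by its raw distance to `subject`."""
    return lambda p: math.dist(p, subject)

def run_query(samples, tests):
    """Rank samples by the weighted sum of normalized test scores, best first.

    `tests` is a list of (test_fn, weight) pairs; raw scores are normalized
    to [0, 1] per test so weights are comparable across tests."""
    per_test = []
    for test_fn, weight in tests:
        raw = [test_fn(p) for p in samples]
        lo, hi = min(raw), max(raw)
        span = (hi - lo) or 1.0
        per_test.append([weight * (r - lo) / span for r in raw])
    totals = [sum(col) for col in zip(*per_test)]
    ranked = sorted(zip(totals, samples), key=lambda t: t[0], reverse=True)
    return [p for _, p in ranked]

# Approach-and-surround sketch: ring around the player, preferring points
# close to the agent (negative weight on agent distance) and away from an
# already-placed teammate (positive weight on teammate distance).
player, agent, teammate = (0.0, 0.0), (10.0, 0.0), (3.0, 0.0)
samples = ring_generator(player, radius=3.0, count=16)
best = run_query(samples, [(distance_test(agent), -1.0),
                           (distance_test(teammate), 0.5)])[0]
```

Running this, the winning point lies on the 3 m ring on the agent’s side of the player, but offset from the teammate’s position — exactly the fan-out behavior described above.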
26.3 Generating Sample Points
The first step in selecting a useful destination for an agent is to generate a set of potentially viable locations to evaluate. When using pregenerated points this is trivial; we typically collect all marker objects in a given range and move on to the ranking phase. For dynamically-generated points, things are more complex as the generation method itself can heavily impact the quality of the final result.
26.3.1 Generation on the Navigation Mesh
The simplest method of dynamically generating a set of sample points is to create a localized 2D grid on the surface of the agent’s environment. Although it is possible to use collision raycasts against level geometry to map out the level floor, this is not only computationally expensive, but the generated points may not be reachable by the agent (e.g., if they lie on a steep slope or narrow corridor). By sampling along the surface of the navigation mesh instead of the actual level geometry, we can both reduce generation cost and ensure that the sample position is reachable by the agent.
However, the overhead of finding the navmesh surface for a large number of sample points can still be significant. To be practical at runtime, we can further minimize generation cost by localizing our projection test to a limited set of navmesh polygons that match
as closely as possible the area to be sampled by the generator. The caveat is that there are multiple valid techniques we can use to define this subset, and the one we choose can significantly affect the outcome of the query. For example, two common approaches are either to gather the set of navmesh polygons within a bounding box centered on the query origin, or to gather the navmesh polygons within a given path distance of the query origin, and then to generate points only on those polygons. The bounding box approach is straightforward to implement, but can generate positions that, measured by path distance, are distant or even unreachable (Figure 26.2a). For behaviors such as finding ranged attack locations, this can be a good fit. Using path distance on the other hand ensures that the origin is reachable from all positions, but ignores locations that are spatially indirect, even if they are physically close (Figure 26.2b). Thus the bounding box approach may work better for behaviors that only require line-of-sight (such as ranged attacks), whereas the path distance method is preferable for behaviors dependent on spatial distance, such as following or surrounding.
Other options exist as well. For instance, we can merge both techniques, relaxing the path distance requirement to find reachable points within a given radius even when the path to that location is long and indirect. For example, given a radius $r$, we can gather all navmesh polygons within some multiple of that radius (say, $2r$). Then, during generation, we can eliminate sample points with a linear distance greater than $r$, giving us better coverage over an area while still ensuring a path to the generator origin.
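The relaxed gather above can be sketched as follows. This is a hypothetical Python illustration that uses a 4-connected grid of walkable cells as a stand-in for navmesh polygons, with BFS step count approximating path distance: gather everything reachable within a path distance of 2r, then keep only samples within straight-line distance r.

```python
import math
from collections import deque

def bfs_path_distance(walkable, start):
    """Grid-graph stand-in for navmesh path distance: 4-connected BFS over
    walkable cells, returning each cell's step distance from `start`."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)
    return dist

def gather_samples(walkable, origin, radius):
    """Hybrid gather: accept cells whose path distance is within 2*radius,
    then drop those whose straight-line distance exceeds radius."""
    path = bfs_path_distance(walkable, origin)
    return [c for c, d in path.items()
            if d <= 2 * radius and math.dist(c, origin) <= radius]

# A corridor that doubles back: cell (0, 2) is only 2 cells away in a
# straight line but 10 steps away by path, so a pure bounding-box gather
# would keep it while the hybrid gather rejects it. Cells like (2, 2) are
# indirect but within 2r by path, so the relaxed rule keeps them.
walkable = ({(x, 0) for x in range(5)} | {(4, 1), (4, 2)}
            | {(x, 2) for x in range(5)})
samples = gather_samples(walkable, origin=(0, 0), radius=4)
```

The result keeps physically close cells reachable by an indirect path while still rejecting cells whose path is far longer than the radius justifies.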
After we have selected the most appropriate method for gathering navmesh polygons, we have a few different methods for generating samples that will impact the effectiveness of our final query:
1. **One-to-one mapping**: Some common navigation libraries, such as Recast/Detour, provide functionality to find the nearest point on the navmesh, given a point and bounding box. We can thus run a search over the gathered polygons at each $(x, y)$ position on the grid, with some reasonably large $z$ value, to verify that a point lies on the section of the navmesh gathered in the previous step. Although efficient, a weakness of this technique is that if your environment has vertically overlapping areas, such as a multi-floored building or bridges, only one level will be discovered (Figure 26.2c).
2. **One-to-many mapping**: A second technique is to use a vertical navigation ray-cast over the gathered polygons at each $(x, y)$ position, generating multiple hits along the $z$ axis whenever we pass through a gathered navmesh polygon. Here, we trade efficiency for accuracy, handling multi-level terrain at the cost of some performance.
26.3.2 Generation Structure
Grids are not the only way to arrange our generated sample points. A custom generator can produce items along walls, arranged in rings, hexes, along waypoint graphs, inside Voronoi cells, or countless other configurations depending on the situation. This decision is important; a poor layout can introduce bias into your query, causing agents to cluster around or avoid certain locations. For tests that are intended to create a smooth scoring gradient, such as distance from a target, it is immediately noticeable when this distribution becomes uneven as agents will begin to approach targets only from specific directions, or settle into locations at specific intervals from the target.
For example, consider a query that wishes to find a location that is as close to an agent’s target as possible, while leaving a 3 m buffer zone around the target. With a grid-based approach, we can first generate a set of sample points around the target, discard those closer than 3 m away, and rank the rest based on their distance from the target. Unfortunately, this exposes a problem, as illustrated in Figure 26.3. Depending on the desired radius, the closest points to the target invariably lie either on the diagonal or cardinal directions. As a result, agents not only cluster around four points, they may also approach the target at an unnatural angle to do so—that is, instead of moving directly toward the target to a point that is 3 m away, they will veer to one side or the other to get to one of the “optimal” points found by the search. In addition, selecting a grid layout for a circular query is intensely inefficient; a large portion of the sample points will either be too close (and thus immediately discarded by the distance test) or too far (and thus will never be selected, because a closer valid point exists).

**Figure 26.3**
(a, b) Examples of diagonal and cardinal distance bias introduced by range tests over grid-generated points. (c) Elimination of distance bias by using a ring generator. Darker circles indicate higher priority positions. Empty circles indicate points that were generated but discarded, whereas light circles indicate positions that were ranked but have no possibility of being used.
In this instance, we can replace the grid generator with a ring generator, eliminating distance bias by guaranteeing that all points closest to the target are the same distance from the target. In addition, we gain an efficiency boost, as we need only generate a fraction of the sample points to perform the same test.
In our projects, this category of query was by far the most common. By changing approach/surround queries to use ring generators, agents selected more natural destinations, and the improved efficiency allowed us to enhance these queries with more complex sets of tests.
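The grid-versus-ring bias is easy to reproduce numerically. In this hypothetical Python sketch (the 1 m spacing and 15×15 extent are illustrative choices), a grid around the target leaves exactly four “optimal” points on the cardinal axes, while a ring generator places every sample at the ideal radius using far fewer points.

```python
import math

target = (0.0, 0.0)

# Grid generator: 1 m spacing, 15x15 grid centered on the target; discard
# points inside the 3 m buffer zone, then rank the rest by distance.
grid = [(x, y) for x in range(-7, 8) for y in range(-7, 8)]
valid = [p for p in grid if math.dist(p, target) >= 3.0]

best_dist = min(math.dist(p, target) for p in valid)
closest = [p for p in valid if math.dist(p, target) == best_dist]
# Only the four cardinal points achieve the minimum distance, so agents
# cluster at those positions and always approach from those angles.

# Ring generator: every point is already at the ideal 3 m radius, and we
# need only a fraction of the samples to cover the same candidates.
ring = [(3.0 * math.cos(2 * math.pi * i / 16),
         3.0 * math.sin(2 * math.pi * i / 16)) for i in range(16)]
```

With the grid, `closest` collapses to `(±3, 0)` and `(0, ±3)`; with the ring, all 16 samples tie at the ideal radius, eliminating the directional bias.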
26.4 Testing Techniques and Test Subjects
Tests are the building blocks that allow complex reasoning about the environment, and are thus the most crucial components of a query system. Although projects invariably require some domain-specific tests, knowing how to combine and reuse simple, generic tests to produce complex results is the key to rapid development. For example, by only mixing and matching the two most versatile tests in a query system’s toolkit, distance and dot product, we can support a surprisingly wide range of tasks beyond the common but simple “move within X meters of target Y” or “find the closest cover point between myself and the target” behaviors. Section 26.8 provides several practical examples of queries built with these two tests.
26.4.1 Single versus Multiple Test Subjects
Some query systems, such as EQS, allow a test to be run against multiple reference locations. By preparing specific concepts such as “all nearby allies,” “all nearby hostiles,” or “all agent destinations,” we can add tests to our queries to improve the final result.
For example, a minimum distance test (Section 26.4.3) weighted against both ally locations and ally destinations can prevent agents from attempting to move not only into currently occupied locations, but also into locations that will be occupied in the near future. For agents that do not require advanced coordinated tactical movement, this single powerful addition can eliminate most location contention without the need to implement specific countermeasures such as point reservation systems.
26.4.2 Distance Test Scoring
When performing a test against multiple test subjects, we have a choice to make: What score do we keep? For example, is it better to record the shortest distance or the average? As shown in Figure 26.4, each can be used to express a different concept. Minimum distance helps us create local attraction or avoidance around the test subjects; this allows us to keep our distance from any area of the map occupied by a test subject. Conversely, average distance gives us the centroid of the subjects, useful for enforcing team cohesion by prioritizing samples within a specific distance from the centroid.
26.4.3 Dot Product Test Techniques
Within our project, this test is the most frequently used after the distance test, and these two as a pair have been used to express more movement concepts than all other tests combined. The dot product test measures the angle between two directions, allowing us to, for example, prioritize sample points in front of or behind an agent. By choosing our test subjects carefully, we can handle virtually any direction-related weighting by stacking dot product tests. Some examples:
- Testing the vector from the agent to a sample point against the agent’s orientation allows us to prioritize locations in front of (or behind) the agent.
- Similarly, testing points against the agent’s right vector instead of its orientation (forward vector) lets us prioritize locations to its right or left.
- We can use one of the tests above to prioritize a heading in front of, behind, to the left or to the right of the agent. However, using both together will prioritize the area where they overlap, allowing us to represent a diagonal heading instead.
- Testing against the world forward and right vectors, instead of the agent’s, can give us prioritization along cardinal or ordinal directions.
- Using a target as the test subject, rather than the agent, gives us the ability to position ourselves in a specific direction relative to that target—for instance, to stand in front of an NPC vendor, to attack an armored enemy from behind, or to walk alongside the player in formation.
- For even more flexibility, we can accept an optional orientation offset in the dot product test itself: By applying a user-defined angle to the set of forward vectors above, we can prioritize points in any direction, not just the cardinal and ordinals.
- By defining both directions as vectors between two subjects, rather than the orientation of the subjects themselves, we can go even further:
  - Comparing the vector from the agent to a sample point against the vector from the sample point to an agent’s target prioritizes locations between the agent and the target. This provides us with locations that get us closer to the target from our current position, ranked by the directness of that approach.
  - By using a sine scoring function (Section 26.5.1) over the same vectors, we prioritize locations where the dot product value approaches zero, generating destinations ranked by indirectness. While still approaching the target, these locations allow us to do so in a curved, flanking manner.
  - Flipping the direction of the first vector (i.e., using the vector from a sample point to the agent instead of the agent to a sample point) reverses the prioritization, providing retreat suggestions ranked by directness away from the target (Figure 26.5).
We can even apply these concepts beyond actors in the scene. For example, ranking sample points based on the dot product of the vector from the camera to a sample point against the camera’s orientation provides us with locations near the center of the screen (though potentially obstructed). Used with a minimum threshold and low weight, this can provide encouragement for buddy AI characters or other agents we want the player to see as much as possible.
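The directness test built from two vectors can be sketched as follows (a hypothetical Python illustration, not an engine API). A point directly between agent and target scores 1, a flanking point on the circle through agent and target scores 0, and a point directly behind the agent scores −1 — the mid-range zero is precisely what sine scoring (Section 26.5.1) can later emphasize.

```python
import math

def normalized(v):
    """Return v scaled to unit length (zero vectors pass through safely)."""
    length = math.hypot(*v) or 1.0
    return (v[0] / length, v[1] / length)

def dot_test(a, b):
    """Dot product of two normalized 2D directions, in [-1, 1]."""
    ax, ay = normalized(a)
    bx, by = normalized(b)
    return ax * bx + ay * by

def direction(frm, to):
    return (to[0] - frm[0], to[1] - frm[1])

agent, target = (0.0, 0.0), (10.0, 0.0)
samples = [(5.0, 0.0),   # directly between agent and target
           (5.0, 5.0),   # flank: on the circle through agent and target
           (-5.0, 0.0)]  # directly away from the target

def betweenness(p):
    """(agent -> sample) . (sample -> target): directness of approach."""
    return dot_test(direction(agent, p), direction(p, target))

scores = {p: betweenness(p) for p in samples}
```

By Thales’ theorem, every point where this dot product is zero lies on the circle whose diameter runs from agent to target — the geometric origin of the “roundabout approach” mentioned in Section 26.5.1.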
26.4.4 Subject Floor Position
In action games where the player can fly, jump, or climb walls, an agent’s target can easily become separated from the navmesh. When used as a generator origin, this results in the entire query failing, as there is no navmesh at the origin location to generate sample points around. On our project, we used two techniques to resolve this issue:
1. We provided a “Target Floor” test subject to supplement Target (the default). This modified version projected the target’s position down to the navmesh floor, if present.
2. We provided a “Closest Navmesh Point to Target” test subject, which scanned the immediate area when the target was off mesh.
Both of these techniques allowed agents to find a suitable location to approach the player when jumping or performing off-mesh traversal. For ground-based enemies, this solution was robust enough to become the default test subject used for engaging the player.

**Figure 26.5**
(a) Approach locations prioritized by directness. (b) Approach locations prioritized by indirectness. (c) Retreat locations prioritized by directness.
26.5 Test Scoring Functions
Once all sample points have been scored, they must be normalized and ranked. Most commonly, we use this value as-is, inverting the priority when needed with a negative test weight. However, as the final post-processing stage in a test’s evaluation, we can pass the normalized score of each sample point to a scoring function to transform its value. Doing so allows us to increase or decrease the influence of certain samples, adding precision to our test’s intent, or transforming the concept it measures entirely.
- **Linear scoring** (Figure 26.6a) is the backbone of most tests, returning the value of normalized test scores exactly as they were passed in.
- **Square scoring** (Figure 26.6b) strongly deemphasizes all but the highest ranked samples in the test. Useful when we want emphasis to drop off rapidly.
- **Square root scoring** (Figure 26.6c) does the opposite; overemphasizing all but the lowest-ranked samples in the test.
- **Sine scoring** (Figure 26.6d) differs from other methods in that it emphasizes mid-range values, and de-emphasizes both the highest- and lowest-ranked sample points.
- Where scoring functions describe the rate at which emphasis should change, a test’s weight determines the direction of change. When a test’s weight is negative, an increase in score is replaced with a corresponding decrease, inverting the scoring curve (Figure 26.6b and e, c and f).
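The four response curves, and weight-based inversion, amount to only a few lines. This hypothetical Python sketch operates on scores already normalized to [0, 1]:

```python
import math

# The four scoring functions of Figure 26.6, over normalized scores in [0, 1].
def linear(x):
    return x

def square(x):
    return x * x          # deemphasizes all but the highest-ranked samples

def square_root(x):
    return math.sqrt(x)   # overemphasizes all but the lowest-ranked samples

def sine(x):
    return math.sin(math.pi * x)  # peaks at mid-range (x = 0.5), zero at ends

def apply_scoring(norm_scores, fn, weight):
    """Post-process a test's normalized scores; a negative weight inverts
    the response curve, as in Figure 26.6e and f."""
    return [weight * fn(x) for x in norm_scores]
```

For a mid-range score of 0.5, square yields 0.25 (tolerant of everything but the best), square root yields about 0.71 (quickly intolerant), and sine yields its maximum of 1.0 — the numerical expression of the tolerance curves discussed above.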
Queries typically require several tests to express a useful concept. In these cases, the highest ranked location will almost always represent a compromise between multiple competing goals. The role of scoring equations is to allow each test to define how tolerant it is of suboptimal locations, and how quickly that tolerance changes. In conjunction with the test weight, this lets us define how that compromise should be met.
For example, if we want an agent that steps away from others as its personal space is encroached, how should we express its level of discomfort? We might approximate it using two tests: a distance test against our current location, expressing our desire to move as little as possible, and a second distance test, negatively weighted against other actors in the scene, expressing our desire to move as far away from them as possible. The balance of these two tests determines when the agent will react. For example, if we use square scoring with a negative weight on the second test (Figure 26.6e), in general other actors will have little effect on the agent’s evaluation of its current location, but when approached extremely closely its desire to stay in its current location will be outweighed by its desire to avoid others and it will try to find a slightly less crowded position. Alternatively, if we instead use square root scoring with a negative weight (Figure 26.6f) then even the influence of distant actors will quickly become overwhelming, creating a nervous agent with a strong desire to keep far away from anyone in the area.

**Figure 26.6**
Normalized result of a distance test from sample points to an agent’s target, after applying linear (a), square (b), square root (c), and sine (d) scoring functions. Response curves become inverted when a negative scoring weight is used, as shown in (b) and (e), and (c) and (f), respectively. Darker shades indicate better locations.
The advantage to expressing satisfaction with scoring functions is that it allows us to produce a dynamic, natural response that is not easily expressed by the hard edges of an acceptance condition. If, instead of measuring satisfaction, we simply invalidated all locations within 2 m of another actor, our agent’s response becomes predictable and artificial. However, by defining a level of comfort for all sample points in the test, our response can change along with the environment. For example, when entering a quiet subway car the agent in our scoring equation example will naturally maintain a polite distance from other passengers, but will gradually permit that distance to shrink as it becomes packed at rush hour, continuously adjusting its reaction as the environment becomes more or less crowded.
26.5.1 Sine Scoring Techniques
Although square, square root, and other monotonic scoring functions can be used to tune test results by compressing or expanding the range of suitable positions, sine scoring gives us an opportunity to use existing tests in altogether new ways. For example, applied to a distance test with a minimum and maximum range, we can define a specific ideal radius to approach a target—the average of the two ranges—while still accepting positions closer to or farther from the target, but with reduced priority.
When applied to the dot product, we have even more options:
- When used against the agent’s orientation, we can express preference for positions to both the left and right, or both forward and behind with a negative weight.
- If we use the absolute value of the dot product with an agent’s orientation, this produces the same result. However, when both are combined, we can now represent preference for either the cardinal or intermediate directions.
- As described in Section 26.4.3, applied to the dot product \((\text{Agent}\rightarrow\text{Sample Point})\cdot(\text{Sample Point}\rightarrow\text{Target})\), we can create a circle between the agent and the target, representing a roundabout approach.
There are many other instances where the most interesting samples are those that lie in the mid-range of a test’s scoring function; sine scoring is the key to discovering them!
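The ranged-distance case above can be sketched as follows. The helper names and the clamp-then-normalize step are assumptions; a real query system would perform the normalization as part of the test itself.

```python
import math

def normalize(value, lo, hi):
    """Clamp and map a raw test value into [0, 1]."""
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def sine_score(x):
    """Peaks at 1.0 when x == 0.5, falling to 0.0 at either end."""
    return math.sin(x * math.pi)

def ranged_distance_score(distance, min_range, max_range):
    """Sine-scored distance test: the ideal distance is the average of
    the two ranges; nearer or farther points still score, but lower."""
    return sine_score(normalize(distance, min_range, max_range))

# With a 5-9 m range, the ideal radius is the 7 m midpoint.
assert abs(ranged_distance_score(7.0, 5.0, 9.0) - 1.0) < 1e-9
assert ranged_distance_score(5.5, 5.0, 9.0) < ranged_distance_score(6.5, 5.0, 9.0)
```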
26.6 Continuous versus Sequential Updates
Most queries are designed to be executed once, at the start of a behavior, to provide the agent with a suitable destination for its current goal (Figure 26.7a). To adapt to changing world conditions, such as a cover point becoming exposed, it is common to periodically run a validation test on the agent’s destination while en route, but for efficiency we typically do not execute another full query until after we have arrived. In some cases, however, it is worth the expense to update continuously, periodically reexecuting the original query without waiting to arrive, and thus generating new recommendations as world conditions change (Figure 26.7b). Not only does this allow us to react more dynamically, it opens the door to new types of query-based behaviors that previously could only be expressed in code. Common concepts like surrounding, orbiting, zig-zag approaches and random walks can all be expressed as a single, repeatedly executed query without any programming required.
26.6.1 Continuous Query-Based Behaviors
By periodically rerunning the same query, providing frequent updates to the agent’s destination, we can create the illusion of sophisticated navigation or decision-making. For example, as shown in Figure 26.8, by generating a ring of points on the navigation mesh around the agent’s target, then simply prioritizing samples a few meters away as well as those in front of our current position, an agent will begin to circle-strafe around the target, avoiding obstacles as it moves and even reversing direction when it becomes stuck.
Traditional positioning can be enhanced by this technique as well. For example, when approaching a target as part of a group, not only can we maintain ideal distance from the target as it moves, but by negatively weighting the area around the agent’s teammates the group can dynamically reposition themselves in relation to each other, creating a natural and responsive surround behavior (Figure 26.9).
26.6.2 Continuous Querying versus Destination Validation
While promising, there are caveats to this method. Compared to destination validation, continuous querying is responsive and can produce high-quality results, but is also computationally expensive. If too many agents in the scene are issuing too many queries, you can easily burn through your AI's CPU budget. It is also more challenging to avoid degenerate behavior: agents becoming stuck in local minima, oscillating between destinations unnaturally, or moving in a stop-and-go fashion by selecting destinations too close to their current position. Nevertheless, the benefits can be substantial and are well worth consideration.

Figure 26.7
Behavior tree implementation of a sequential query-based behavior (a) versus a continuous query-based behavior (b).
26.7 Reducing Query Failure
Using the techniques thus far, we have been able to reason about the ideal layout for generated points, apply tests on single or multiple subjects, and adjust their scoring based on our needs. In theory, this should be enough to produce high-quality results from a spatial query system. In practice, however, a query designed only around ideal conditions can fail outright when the environment does not cooperate.
Figure 26.8
Orbiting a target with a continuously updated query (a). As positions in front of the agent become increasingly unsuitable, positions behind the agent gradually gain utility (b), ultimately causing the agent to automatically reverse direction when it can no longer proceed (c).
Figure 26.9
A group of agents approach and surround a target in sequence. In this example, agents prefer locations that are near the target, as well as along the vector between the agent and the target (a), but negatively weight locations near other agents to prevent clustering (b, c, and d). This produces an organic surround behavior that maintains formation continuously as the target moves, and adapts naturally as the number of agents increases or decreases.
For example, the player may be in a narrow corridor, causing all samples in our flanking query to fail, or they may be facing a wall, making a dramatic surround from the front impossible. A query is useless if it never works and, unfortunately, it is common to design one that fails too easily in unexpected circumstances like these. Fortunately, by making some simple adjustments, a brittle query can be adapted to provide graceful degradation of position quality in unfavorable conditions. In this section, we show how a query can be modified to give the AI the ability to execute a behavior in a wider range of conditions while still returning the ideal result when possible, making it resilient to the complexities of a modern game environment (Figure 26.10).
26.7.1 Increasing Permissiveness
The first action we can take to make a brittle query more failure-resistant is to make it more permissive. That is, we can relax our success conditions so that we have more sample points that can serve as a destination, but use tests to give them a lower final rank so that they are only selected when the ideal conditions are unavailable. In Figure 26.10, we have an example query that attempts to find an attack position in front of the agent’s target. If it is acceptable to attack from behind, but not preferred, we can add additional sample points around the target, but weight them with a dot product test so that the samples behind the target receive a low rank. Done this way, agents will still approach the player from the front, unless it is impossible due to the player’s location (near the edge of a cliff, facing a wall, etc.).
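The permissive attack query described above can be sketched as follows. This is illustrative 2D Python, not engine code; the ring generator, sample count, and the particular frontness normalization are assumptions.

```python
import math

def ring_points(center, radius, count):
    """Generate sample points on a ring around the target."""
    return [(center[0] + radius * math.cos(2 * math.pi * i / count),
             center[1] + radius * math.sin(2 * math.pi * i / count))
            for i in range(count)]

def rank_attack_positions(target_pos, target_facing, radius=5.0, samples=8):
    """Permissive attack query: sample the full ring around the target,
    then use a dot product test to rank points behind the target lower,
    so they are chosen only when frontal positions are unusable."""
    ranked = []
    for point in ring_points(target_pos, radius, samples):
        dx, dy = point[0] - target_pos[0], point[1] - target_pos[1]
        mag = math.hypot(dx, dy) or 1.0
        # 1.0 directly in front of the target, 0.0 directly behind.
        frontness = 0.5 * ((dx * target_facing[0] + dy * target_facing[1]) / mag + 1.0)
        ranked.append((frontness, point))
    return sorted(ranked, reverse=True)

# Target at the origin facing +x: frontal samples outrank rear ones,
# but every sample remains a usable candidate.
ranked = rank_attack_positions((0.0, 0.0), (1.0, 0.0))
best_point, worst_point = ranked[0][1], ranked[-1][1]
assert best_point[0] > 0 > worst_point[0]
```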
26.7.2 Increasing Robustness
The next action we can take is to make the query more robust. In this case, we relax our concept of the ideal case itself, providing a larger pool of sample points than the strict ideal conditions alone would allow. For example, our query may specify a position 5 m away and up to 30° from the front of the target, but in reality it may work fine at any distance between 5 and 8 m, and up to 45° in front of the target. Figure 26.10 also shows an example of this solution.
Figure 26.10
Reducing query failure of a strict, brittle query (a) by increasing robustness (b), permissiveness (c), or both (d). Whenever possible, strategies (c) and (d) will return the same result as the original query.
26.7.3 Fallback Queries
Some query systems, such as EQS, provide multiple query options, or strategies, that can be defined as part of a single query. These can be thought of as fallback queries, providing alternative location suggestions to consider if the initial query fails. Thus if the initial option has no usable samples, subsequent ones are executed in order until one succeeds. Only when all options fail does the query itself fail. Clever use of fallback queries can also create opportunities for optimizing high-quality behavior: By defining a narrow initial sample set, we can run more expensive tests in the primary option that would normally be cost prohibitive, such as collision raycasts. In the fallback query, with a wider range of sample points, we can remove these tests to find a more mediocre, but still acceptable, location.
26.7.4 Preserving Quality
Taking advantage of these techniques, we can adapt our queries to be both permissive and robust, while still returning the same results as the initial query when possible. In Figure 26.10, the result of the rightmost query (d) can be achieved in two ways:
1. Combine permissive and robust testing strategies: Pair a dot product gradient ranking with a larger valid sample area, then further add a distance test to weight the closest points higher. This layering results in the original set of points receiving the highest rank whenever they are available.
2. Define the leftmost query as the initial query strategy; if this fails, execute a fallback query that combines the permissive and robust testing strategies. This has the benefit of only incurring additional cost when ideal conditions are unavailable, at the expense of a higher overall cost of running both queries in suboptimal conditions.
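The fallback-option pattern, together with the cost trade-off in option 2, might be sketched like this. The query-runner API here is hypothetical (it is not the actual EQS interface); a test returning `None` stands in for a mandatory test, such as an expensive raycast, rejecting a sample.

```python
def run_query(generator, tests):
    """Hypothetical query runner: returns the best-ranked point, or None
    if every sample fails a mandatory test (signalled by a test
    returning None instead of a score)."""
    best = None
    for point in generator():
        score = 0.0
        for test in tests:
            result = test(point)
            if result is None:      # mandatory test failed: discard sample
                break
            score += result
        else:
            if best is None or score > best[0]:
                best = (score, point)
    return None if best is None else best[1]

def query_with_fallbacks(options):
    """Try each (generator, tests) option in order until one produces a
    usable result; the query as a whole fails only when every option does."""
    for generator, tests in options:
        result = run_query(generator, tests)
        if result is not None:
            return result
    return None

# Toy usage: a narrow primary option whose expensive test (say, a
# raycast) rejects its only sample, and a wider, cheaper fallback.
primary = (lambda: [(1.0, 0.0)], [lambda p: None])
fallback = (lambda: [(2.0, 0.0), (3.0, 0.0)], [lambda p: -p[0]])
assert query_with_fallbacks([primary, fallback]) == (2.0, 0.0)
```

Note that the fallback's tests only run once the primary has already failed, which is the "pay the extra cost only in suboptimal conditions" trade-off described in option 2.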
26.8 Example Behaviors
In this section, we provide a handful of the many behaviors possible using only the most basic tests and the continuous movement behavior tree in Figure 26.7. For each behavior listed below, the only difference in the agent’s AI is the query itself.
26.8.1 Directed Random Walk
By selecting a random item from the set of sample points, instead of the one that has the highest rank, we can add variety and unpredictability to a query-based behavior. For example, the classic NPC random walk (moving a short distance in an arbitrary direction, stopping briefly between moves) can be represented as a query by generating a filled circle of points up to a specified radius, ranking them by sine-scored distance to define an ideal move distance, and finally selecting randomly among the top 25% highest ranked points. By adding a dot product test to favor the agent’s current direction, we eliminate harsh turns and unnatural oscillations, creating an agent that takes a long winding path through its environment. Finally, a minimum distance test weighted against other agents keeps agents evenly distributed and avoids collisions in crowded environments. By including our current position in the set of generated points, an agent that is boxed in can choose not to move until a reasonable location becomes available (Table 26.1).
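The "select randomly among the top 25% highest ranked points" step can be sketched as follows. The helper name is hypothetical, and the scoring of the points is assumed to have happened elsewhere in the query.

```python
import random

def pick_destination(scored_points, top_fraction=0.25, rng=random):
    """Pick a random point from the highest-ranked fraction of samples,
    trading a little optimality for variety and unpredictability."""
    ranked = sorted(scored_points, key=lambda sp: sp[0], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return rng.choice(ranked[:cutoff])[1]

# Ten scored points; only the two best (scores 0.9 and 0.8) are eligible.
points = [(s / 10.0, (s, 0)) for s in range(10)]
assert pick_destination(points)[0] >= 8
```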
26.8.2 Stay on Camera
If we want to position agents where they can be seen by the player, we can use the location and orientation of the camera to prioritize sample points based on their distance from the center of the screen. A camera frustum test can be approximated with a dot product, using a lower clamp to define the angle of a view cone. A negatively weighted distance test relative to the agent ensures the agent moves as little as possible to stay in view. Optionally, adding a raycast test between each point and the camera will eliminate points in front of the camera but hidden behind other objects or scenery, improving behavior quality at the cost of performance (Table 26.2).
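The clamped dot-product approximation of a camera frustum might look like this. This is an illustrative 2D sketch; a real implementation would work in 3D with the actual camera transform, and the 0.85 lower clamp is just the example value from Table 26.2.

```python
import math

def view_cone_score(camera_pos, camera_forward, point, lower_clamp=0.85):
    """Approximate a frustum test with a clamped dot product: points
    outside the view cone (dot < lower_clamp) score 0, and points
    near the center of the screen approach 1."""
    dx, dy = point[0] - camera_pos[0], point[1] - camera_pos[1]
    mag = math.hypot(dx, dy) or 1.0
    d = (dx * camera_forward[0] + dy * camera_forward[1]) / mag
    if d < lower_clamp:
        return 0.0
    return (d - lower_clamp) / (1.0 - lower_clamp)

# Camera at the origin looking along +x: a point straight ahead sits at
# the center of the cone, while a point far off-axis falls outside it.
assert view_cone_score((0.0, 0.0), (1.0, 0.0), (10.0, 0.0)) == 1.0
assert view_cone_score((0.0, 0.0), (1.0, 0.0), (0.0, 10.0)) == 0.0
```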
26.8.3 Orbit
To walk or run in a circle around a target, we generate a ring of points within the minimum and maximum acceptable radius, then use a set of tests that when combined generate forward movement around the ring. The first, a sine-scored distance test around the agent, defines an ideal movement distance a few meters away from its current position; far enough that we should not arrive before reexecuting the query, but close enough to ensure small, smooth adjustments in heading. Next, a dot product test prioritizes items in the direction that the agent is currently heading, which encourages stable forward movement along the ring (clockwise or counter-clockwise). A second sine-ranked dot product test prioritizes points on the tangent line from the agent to the ring. This serves two purposes: It directs the agent to approach the ring along the tangent line (rather than head on, then turning 90° to begin orbiting), and it strongly prioritizes items directly in front of and behind the agent, allowing the agent to reverse direction when blocked while further stabilizing forward movement. Finally, clamped minimum distance tests around the positions and current destinations of other agents provide local avoidance (Table 26.3).
### Table 26.1 Directed Random Walk: Moves Relatively Forward, Avoiding Others

| Weight | Type | Parameters | Scoring |
|--------|------|------------|---------|
| N/A | Ring generator | 0–8 m around agent | |
| 1 | Distance | Relative to agent | Sine |
| 1 | Dot | (Agent→Sample)·(Agent rotation) | Sigmoid |
| 1 | Minimum distance | Relative to other agents | Linear, 2–10 m range |
### Table 26.2 Stay on Camera: Agent Attempts to Stay on Screen While Moving as Little as Possible

| Weight | Type | Parameters | Scoring |
|--------|------|------------|---------|
| N/A | Grid generator | 20 m around agent | |
| –1 | Distance | Relative to agent | Linear |
| 1 | Dot | (Camera→Sample)·(Camera rotation) | Linear, 0.85–1.0 range |
26.8.4 Boids
Spatial queries can even represent artificial life simulations. Craig Reynolds’s historic boids program, simulating the flocking behavior of birds, produces complex, emergent group behavior from an unexpectedly simple set of rules (Reynolds 1987). By implementing these rules using spatial query tests, we can recreate the original boids simulation as a continuous query behavior. In the original SIGGRAPH paper, individual boid movement was produced by combining the influence of three separate rules:
- Separation, to avoid crowding
- Alignment, to coordinate the flock direction
- Cohesion, to prevent the flock from dispersing
Within a query, separation can be represented as a minimum distance test against other agents, alignment as a dot product test against the average heading of the group, and cohesion as a distance test against the group centroid. By tuning the weights and ranges of these tests, we can adjust the emergent properties of the behavior (Table 26.4).
### Table 26.3 Orbit: Moves in a Circle around a Target Avoiding Others
<table>
<thead>
<tr>
<th>Weight</th>
<th>Type</th>
<th>Parameters</th>
<th>Scoring</th>
</tr>
</thead>
<tbody>
<tr>
<td>N/A</td>
<td>Ring generator</td>
<td>5–9 m around target</td>
<td></td>
</tr>
<tr>
<td>8</td>
<td>Distance</td>
<td>Relative to agent</td>
<td>Sine, 0–6 m range</td>
</tr>
<tr>
<td>4</td>
<td>Dot</td>
<td>(Agent→Sample)·(Agent→Destination)</td>
<td>Linear</td>
</tr>
<tr>
<td>2</td>
<td>Dot</td>
<td>(Agent→Sample)·(Agent→Target)</td>
<td>Sine</td>
</tr>
<tr>
<td>1</td>
<td>Minimum distance</td>
<td>Relative to other agents</td>
<td>Sigmoid, 0–5 m range</td>
</tr>
<tr>
<td>1</td>
<td>Minimum distance</td>
<td>Relative to other agent destinations</td>
<td>Sigmoid, 0–5 m range</td>
</tr>
</tbody>
</table>
*Note:* If forward movement is obstructed by the environment or other agents, the agent will turn around and continue orbiting in the opposite direction.
### Table 26.4 Boids: Simulates Boid Flocking Behavior
<table>
<thead>
<tr>
<th>Weight</th>
<th>Type</th>
<th>Parameters</th>
<th>Scoring</th>
</tr>
</thead>
<tbody>
<tr>
<td>N/A</td>
<td>Ring generator</td>
<td>1–20 m around agent</td>
<td></td>
</tr>
<tr>
<td>1.2</td>
<td>Minimum distance</td>
<td>Relative to other agents</td>
<td>Linear, 0–4 m range</td>
</tr>
<tr>
<td>0.5</td>
<td>Dot</td>
<td>(Agent→Sample)·(Average rotation of other agents)</td>
<td>Linear</td>
</tr>
<tr>
<td>−1</td>
<td>Distance</td>
<td>Relative to other agents</td>
<td>Linear</td>
</tr>
</tbody>
</table>
*Note:* Simulates boid flocking behavior using minimum distance, dot product, and average distance tests to represent separation, alignment, and cohesion respectively.
26.9 Conclusion
Once a novel alternative to traditional techniques, over the past five years spatial query systems have evolved into indispensable tools for AI development. Now commonplace and supported by multiple widely used game engines, integrating spatial query systems into your AI is more practical than ever, providing faster iteration time and higher quality position selection in dynamic and complex environments. By understanding the strengths and weaknesses of each component of a query, we can improve query quality and flexibility over a wider range of environmental conditions. Single queries, executed continuously, can even express traditionally code-driven movement behaviors, making query systems an increasingly versatile tool, able to single-handedly support most, if not all, destination selection in a project.
References

Reynolds, C. W. 1987. Flocks, herds and schools: A distributed behavioral model. In *Computer Graphics (SIGGRAPH '87 Proceedings)*, 21(4), 25–34.
The String-to-String Correction Problem with Block Moves
Walter F. Tichy
Report Number: 83-459
The String-to-String Correction Problem with Block Moves
Walter F. Tichy
Purdue University
Department of Computer Science
West Lafayette, IN 47907
CSD-TR 459
ABSTRACT
The string-to-string correction problem is to find a minimal sequence of edit operations for changing a given string into another given string. Extant algorithms compute a Longest Common Subsequence (LCS) of the two strings and then regard the characters not included in the LCS as the differences. However, an LCS does not necessarily include all possible matches, and therefore does not produce the shortest edit sequence.
We present an algorithm which produces the shortest edit sequence transforming one string into another. The algorithm is optimal in the sense that it generates a minimal, covering set of common substrings of one string with respect to the other.
Two runtime improvements of the basic algorithm are also presented. Runtime and space requirements of the improved algorithms are comparable to LCS algorithms.
Categories and Subject Descriptors: D.2.2 [Software Engineering]: Tools and Techniques—programmer workbench, software libraries; D.2.6 [Software Engineering]: Programming Environments; D.2.7 [Software Engineering]: Distribution and Maintenance—version control
General Terms: Algorithms
Additional Key Words and Phrases: String-to-string correction, block moves, deltas, differences, source control, revision control
October 26, 1983
Introduction
The string-to-string correction problem is to find a minimal sequence of edit operations for changing a given string into another given string. The length of the edit sequence is a measure of the differences between the two strings. Programs for determining differences in this manner are useful in the following situations.
(1) Difference programs help determine how versions of text files differ. For instance, computing the differences between revisions of a software module helps a programmer trace the evolution of the module during maintenance[8], or helps create test cases for exercising changed portions of the module. Another application is the automatic generation of change bars for new editions of manuals and other documents.
(2) Frequently revised documents like programs and graphics are stored most economically as a set of differences relative to a base version[10,12]. Since the changes are usually small and typically occupy less than 10% of the space needed for a complete copy[10], difference techniques can store the equivalent of about 11 revisions in less space than would be required for saving 2 revisions (one original and one backup copy) in cleartext.
(3) Changes to programs and other data are most economically distributed as "update decks" or "deltas", which are edit sequences that transform the old version of a data object into the new one. This approach is used in software distribution. A related application can be found in screen editors and graphics packages. These programs update display screens efficiently by computing the difference between the old and new screen contents, and then transmitting only the changes to the display[2].

This work was supported in part by the National Science Foundation under grant MCS-8109513.
(4) In genetics, difference algorithms compare long molecules consisting of nucleotides or amino acids. The differences provide a measure of the relationship between types of organisms[11].
Most of the existing programs for computing differences are based on algorithms that determine a Longest Common Subsequence (LCS). An LCS has a simple and elegant definition, and algorithms for computing an LCS have received some attention in the literature[13, 4, 6, 7, 5, 9]. An LCS of two strings is one of the longest subsequences that can be obtained by deleting zero or more symbols from each of the two given strings. For example, the longest common subsequence of *shanghai* and *sakhalin* is *sahai*. Once an LCS has been obtained, all symbols that are not included in it are considered differences. A simultaneous scan of the two strings and the LCS isolates those symbols quickly. For example, the following edit script, based on the LCS *sahai*, would construct the target string *sakhalin* from *shanghai*.
```
M 0,1
M 2,1
A "k"
M 5,2
A "l"
M 7,1
A "n"
```
An edit-command of the form $M p, l$, called a *move*, appends the substring $S[p, \ldots, p + l - 1]$ of source string $S$ to the target string, and an *add* command of the form $A w$ appends the string $w$ to the target string. In the above example, the edit script takes up much more space than the target string, and none of the savings mentioned earlier are realized. In practical cases, however, the common subsequence is not as fragmented, and a single *move* command covers a long substring. In addition, if this technique is applied to text, one usually chooses full text lines rather than single characters as the atomic symbols. Consequently, the storage space required for a *move* is negligible compared to that of an *add* command, and it is worth minimizing the occurrence of the *add* commands. Note that in the above example, the last *add* command could be replaced with a *move*, since the symbol $n$ appears in both strings.
Unfortunately, the definition of an LCS is such that the $n$ cannot be included in the LCS. The algorithm presented below does not omit such matches.
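The semantics of the move and add commands can be illustrated with a small interpreter. The tuple encoding is an assumption for this sketch; the paper itself only defines the textual command forms.

```python
def apply_edit_script(source, script):
    """Rebuild a target string from a source string and an edit script.

    Each command is either ("M", p, l), appending source[p : p + l],
    or ("A", w), appending the literal string w.
    """
    target = []
    for cmd in script:
        if cmd[0] == "M":
            _, p, l = cmd
            target.append(source[p:p + l])
        else:
            target.append(cmd[1])
    return "".join(target)

# The script from the example above, based on the LCS "sahai":
script = [("M", 0, 1), ("M", 2, 1), ("A", "k"), ("M", 5, 2),
          ("A", "l"), ("M", 7, 1), ("A", "n")]
assert apply_edit_script("shanghai", script) == "sakhalin"
```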
**Problem Statement**
Given 2 strings $S=S[0,\ldots,n], n \geq 0$ and $T=T[0,\ldots,m], m \geq 0$, a **block move** is a triple $(p,q,l)$ such that $S[p,\ldots,p+l-1] = T[q,\ldots,q+l-1]$ ($0 \leq p \leq n-l+1, 0 \leq q \leq m-l+1, l > 0$). Thus, a block move represents a non-empty, common substring of $S$ and $T$ with length $l$, starting at position $p$ in $S$ and position $q$ in $T$. A **covering set of $T$ with respect to $S$**, denoted by $\delta_S(T)$, is a set of block moves, such that every symbol $T[i]$ that also appears in $S$ is included in exactly one block move. For example, a covering set of $T=\text{abcab}$ with respect to $S=\text{abda}$ is $\{(0,0,2),(0,3,2)\}$. A trivial covering set consists of block moves of length 1, one for each symbol $T[i]$ that appears in $S$.
The problem is to find a **minimal** covering set, $\Delta_S(T)$, such that $|\Delta_S(T)| \leq |\delta_S(T)|$ for all covering sets $\delta_S(T)$. The coverage property of $\Delta_S(T)$ assures that all possible matches are included, and the minimality constraint makes the set of block moves (and therefore the edit script) as small as possible.
Because of the coverage property, it is apparent that $\Delta_S(T)$ includes the LCS of $S$ and $T$. (Consider the concatenation of the substrings $T[q_j,\ldots,q_j+l_j-1]$, where $(p_j,q_j,l_j)$ is a block move of $\Delta_S(T)$, and the substrings are concatenated in order of increasing $q_j$.) The minimality constraint assures that the LCS cannot provide a better "parcelling" of the block moves.
**False Starts**
Before presenting the solution, it is useful to consider several more or less obvious approaches, all of which fail. The first approach is to use the LCS. As we have seen, an LCS has the property of not necessarily generating a covering set of block moves. For example, the following two pairs of strings have the LCS $\text{abc}$, which does not include the (moved) common substring $\text{de}$ nor the (repeated)
common substring \(abc\). The LCS match is shown on the left, \(\Delta_S(T)\) on the right.
\[
\begin{align*}
S &= \text{abcde} & S &= \text{abcde} \\
T &= \text{deabc} & T &= \text{deabc} \\
S &= \text{abc} & S &= \text{abc} \\
T &= \text{abcabc} & T &= \text{abcabc}
\end{align*}
\]
Heckel[3] pointed out similar problems with LCS techniques and proposed a linear-time algorithm to detect block moves. The algorithm performs adequately if there are few duplicate symbols in the strings, but gives poor results otherwise. For example, given the two strings \(aab\) and \(bb\), Heckel's algorithm fails to discover any common substring.
An improvement of the LCS approach is to apply the LCS extraction iteratively. For instance, after finding the initial LCS in the above examples, one could remove it from the target string \(T\) and recompute the LCS. This process is repeated until only an LCS of length 0 remains. The iterative LCS strategy succeeds in finding a covering set, but not necessarily the minimal one. The following example illustrates.
\[
\begin{align*}
S &= \text{abcdea} & S &= \text{abcdea} \\
T &= \text{cdab} & T &= \text{cdab}
\end{align*}
\]
Assuming again that \(S\) is the source string and \(T\) is the target string, the left diagram shows the match obtained via an iterative LCS algorithm. The first LCS is \(cda\), the second one is \(b\). Since \(cda\) is not a substring of \(S\), we obtain a total of 3 block moves. The minimal covering set, shown to the right, consists of 2 block moves.
Another tack is to search for the longest common substring rather than the longest common subsequence*. Computing the longest common substring iteratively results in a covering set, but again not necessarily a minimal one.
* Recall that a subsequence may have gaps, a substring may not.
Consider the following example.
\[ S = \text{abcdefdeab} \quad T = \text{abcdefdeab} \]
The left diagram shows the block moves obtained by searching repeatedly for the longest common substring of \( S \) and \( T \). The result is a set of 3 block moves, although 2 are minimal. Searching for the longest common substring is too "greedy" a method, since it may mask better matches.
**Basic Algorithm**
A surprisingly simple algorithm does the job. Start at the left end of the target string \( T \), and try to find prefixes of \( T \) in \( S \). If no prefix of \( T \) occurs in \( S \), remove the first symbol from \( T \) and start over. If there are prefixes, choose the longest one and record it as a block move. Then remove the matched prefix from \( T \) and try to match a longest prefix of the remaining tail of \( T \), again starting at the beginning of \( S \). This process continues until \( T \) is exhausted. The recorded block moves constitute a \( \Delta_S(T) \), a minimal covering set of block moves of \( T \) with respect to \( S \), as will be shown later. The following example illustrates several steps in the execution of the algorithm. The string to the right of the vertical bar is the unprocessed tail of \( T \).
**Step 1:**
\[ S = uvwuvwxy \]
\[ T = \mid zuvwxwu \]
longest block move starting with \( T[0] \): none
**Step 2:**
\[ S = uvwuvwxy \]
\[ T = z \mid uvwxwu \]
longest block move starting with \( T[1] \): \((3,1,4)\)
**Step 3:**
\[ S = uvwuvwxy \]
\[ T = zuvwx \mid wu \]
longest block move starting with \( T[5] \): \((2,5,2)\)
In step 1, we search for a prefix of \(T[0, \ldots, 6]\) in \(S[0, \ldots, 7]\). Since there is none, we search for a prefix of \(T[1, \ldots, 6]\) in the next step. This time we find 2 matches, and choose the longer one, starting with \(S[3]\). In step 3, we search for a prefix of \(T[5, \ldots, 6]\) in \(S[0, \ldots, 7]\), and find the longest one at \(S[2]\), length 2. Now \(T\) is exhausted and the algorithm stops. Note that in each step we start at the left end of \(S\) in order to consider all possible matches.
The algorithm is presented below. Let us assume that the source string is stored in an array \(S[0, \ldots, m]\), and the target string in \(T[0, \ldots, n]\). \(T[q]\) is the first symbol of the unmatched tail of \(T\); \(q\) is initially zero. The first refinement of the algorithm is now as follows.
q := 0;
while q <= n do
begin
    L: find p and l such that (p, q, l) is a maximal block move;
    if l > 0 then print(p, q, l);
    q := q + Max(l, 1)
end
Implementing the statement labelled \(L\) is simple. Search \(S\) from left to right for a longest possible prefix of \(T[q, \ldots, n]\). Note that the search can terminate as soon as there are fewer than \(l + 1\) symbols left in \(S\), assuming that \(l\) is the length of the maximal block move found in the current iteration. Similarly, there is no possibility of finding a longer block move if the last one included \(T[n]\). (We use \textbf{and then} as the conditional logical AND operator.)
L:
l := 0;  p := 0;  pCur := 0;
while (pCur + l <= m) and (q + l <= n) do
begin { determine length of match between S[pCur, ...] and T[q, ...] }
    lCur := 0;
    while (pCur + lCur <= m) and (q + lCur <= n)
        and then (S[pCur + lCur] = T[q + lCur])
    do lCur := lCur + 1;
    if lCur > l then
    begin { new maximum found }
        l := lCur;  p := pCur
    end;
    pCur := pCur + 1
end
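For concreteness, the basic algorithm translates directly into the following Python sketch (function and variable names are ours); it returns the covering set as a list of \((p, q, l)\) triples:

```python
def block_moves(S, T):
    """Greedy covering set of block moves of T with respect to S.

    Returns triples (p, q, l) with S[p:p+l] == T[q:q+l]; symbols of T
    skipped by the moves do not occur anywhere in S.
    """
    moves = []
    q = 0
    while q < len(T):
        # scan S left to right for the longest prefix of T[q:]
        best_p, best_l = 0, 0
        for p_cur in range(len(S)):
            l_cur = 0
            while (p_cur + l_cur < len(S) and q + l_cur < len(T)
                   and S[p_cur + l_cur] == T[q + l_cur]):
                l_cur += 1
            if l_cur > best_l:
                best_p, best_l = p_cur, l_cur
        if best_l > 0:
            moves.append((best_p, q, best_l))
        q += max(best_l, 1)   # skip one symbol of T if no match exists
    return moves
```

On the example above, `block_moves("uvwuvwxy", "zuvwxwu")` yields the moves \((3,1,4)\) and \((2,5,2)\).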
The runtime of this algorithm is bounded by \(mn\), and the space requirements are \(m + n\). We now show that this algorithm finds \(\Delta_S(T)\). Clearly, the set of block moves printed is a covering set, because each symbol in \(T\) that is not
included in some block move is (unsuccessfully) matched against each symbol in $S$. To see that the covering set is minimal, consider $T$ below, with the matching produced by our algorithm denoted as follows. Substrings included in a block move are bracketed by "(" and ")". Substrings of symbols excluded from any block move are denoted by $X$.
\[ \cdots X ( \cdots ) X ( \cdots )( \cdots ) X ( \cdots )( \cdots )( \cdots ) X \cdots \]
Suppose there is a $\delta^*_S(T)$ with fewer block moves than the set generated by our algorithm. Clearly, the substrings denoted by $X$ cannot be part of $\delta^*_S(T)$, because their symbols occur nowhere in $S$. We can therefore exclude all unmatched substrings from consideration, and concentrate on individual sequences of contiguous block moves.
Now consider block moves that are contiguous in $T$. The only way to obtain a smaller covering set is to find a sequence of $k > 1$ contiguous block moves and to "reparcel" them into a covering set of fewer moves. We will show by induction on the number of contiguous block moves that the set produced by our algorithm is minimal.
Suppose we have $k \geq 1$ contiguous block moves generated by our algorithm. This means that we have $k$ triples $(p_i, q_i, l_i)$, $(1 \leq i \leq k)$ satisfying the following conditions.
\[
\begin{align*}
\forall i,\ 1 \leq i \leq k: \quad & T[q_i, \ldots, q_i + l_i - 1] = S[p_i, \ldots, p_i + l_i - 1] \quad (*) \\
\forall i,\ 1 \leq i \leq k,\ \forall p,\ 0 \leq p \leq m - l_i: \quad & T[q_i, \ldots, q_i + l_i] \neq S[p, \ldots, p + l_i] \quad (**) \\
\forall i,\ 1 \leq i < k: \quad & q_i + l_i = q_{i+1} \quad (***)
\end{align*}
\]
The first condition is just the definition of a block move. The second condition assures that each block move starting at $T[q_i]$ is maximal. The third condition means that the block moves are contiguous in $T$.
We need to show that for any set of $k$ block moves satisfying (*) to (***), any equivalent set has at least $k$ block moves. Actually, it is convenient to prove something slightly more general: for any set of $k$ block moves satisfying (*) to (***), any set which covers the first $k - 1$ block moves and a non-empty prefix of block move $k$ has at least $k$ block moves. First, assume $k = 1$. Clearly, we cannot cover a non-empty prefix of a single block move with fewer than 1 block move. Now assume that $k > 1$, and that all sets covering the first $k - 2$ block
moves and any non-empty prefix of block move \( k - 1 \) consist of at least \( k - 1 \) block moves. Consider what we can do with non-empty prefixes of the \( k \)'th block move. There are two cases. The first case applies to sets that cover the original block move \( k - 1 \) with a single move \( B \). In this case, let \( B = (p_b, q_b, l_b) \), where \( q_b \leq q_{k-1} \) and \( q_b + l_b = q_{k-1} + l_{k-1} \). By the induction hypothesis, \( B \) is at least the \( (k - 1) \)'st move in the equivalent set. It is impossible to append a non-empty prefix of move \( k \) to \( B \), since that would contradict (**). Thus we need at least \( k \) moves for covering the original \( k - 1 \) moves and any non-empty prefix of move \( k \).
The second case applies to sets that split the original block move \( k - 1 \) into at least 2 non-empty moves (see the diagram below).
<table>
<thead>
<tr>
<th>orig. block move no.</th>
<th>k-2</th>
<th>k-1</th>
<th>k</th>
</tr>
</thead>
<tbody>
<tr>
<td>orig. set</td>
<td>..... ) ( ...</td>
<td>( ... ) ( ... )</td>
<td></td>
</tr>
<tr>
<td>( \delta'_{S}(T) ) covering k-1</td>
<td>......</td>
<td>( ... )</td>
<td></td>
</tr>
<tr>
<td>( \delta''_{S}(T) ) covering k</td>
<td>......</td>
<td>( ... ) ( ... ) ( ... )</td>
<td></td>
</tr>
</tbody>
</table>
The only choice to reduce the number of block moves below \( k \) is to coalesce the suffix of the original move \( k - 1 \) with a non-empty prefix of move \( k \). This new parcelling leaves us with (a) a set covering the original \( k - 2 \) block moves and a non-empty prefix of block move \( k - 1 \), (b) a new coalesced move covering a suffix of move \( k - 1 \) and a prefix of \( k \), and (c) another block move if the suffix of move \( k \) is not empty. By the induction hypothesis, we know that (a) has at least \( k - 1 \) moves. Add to that the (non-empty) coalesced move, and we end up with at least \( k \) moves for covering the first \( k - 1 \) block moves and any non-empty prefix of move \( k \). Thus, any set equivalent to the block moves generated by our algorithm has at least \( k \) elements. QED.
First Improvement of the Basic Algorithm
Consider a situation where the source string \( S \) has few replicated symbols. That is, \( \alpha \), the size of the alphabet of \( S \), is approximately equal to \( m \). In this case, a significant improvement of the basic algorithm is possible. During a single scan of \( S \), we prepare an index that, for each symbol \( s \) in the alphabet, lists the positions of all occurrences of \( s \) in \( S \). In the basic algorithm, we replace the statement labelled \( L \) with the following. Assume \( T[q] = s \) is the first symbol of the unmatched tail of \( T \). Look up the list of occurrences of symbol \( s \) in \( S \) using the above index. If the list is empty, no match is possible. Otherwise, find the maximal block move among those starting at the listed positions in \( S \).
The performance of this algorithm is as follows. Assume the average length of a block move is \( l \). Then the maximal block move must be selected from among \( m/\alpha \) alternatives, at a cost of not more than \( l + 1 \) comparisons each. Thus, the runtime of the algorithm is \( O(l \cdot (m/\alpha) \cdot (n/l)) = O(mn/\alpha) \). Since \( \alpha \approx m \) by assumption, we obtain a nearly linear algorithm.
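This indexing scheme can be sketched in Python as follows (a hypothetical variant of the basic algorithm; names are ours). Only positions where \( S \) carries the symbol \( T[q] \) are tried as starting points:

```python
from collections import defaultdict

def block_moves_indexed(S, T):
    """Basic block-move algorithm with a position index on S (a sketch
    of the first improvement; names are ours)."""
    index = defaultdict(list)
    for p, sym in enumerate(S):
        index[sym].append(p)
    moves, q = [], 0
    while q < len(T):
        best_p, best_l = 0, 0
        # only occurrences of T[q] in S can start a match
        for p_cur in index.get(T[q], []):
            l_cur = 0
            while (p_cur + l_cur < len(S) and q + l_cur < len(T)
                   and S[p_cur + l_cur] == T[q + l_cur]):
                l_cur += 1
            if l_cur > best_l:
                best_p, best_l = p_cur, l_cur
        if best_l > 0:
            moves.append((best_p, q, best_l))
        q += max(best_l, 1)
    return moves
```

When each symbol occurs only a few times in \( S \), the inner loop examines \( m/\alpha \) candidates instead of all \( m \) positions.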
Program text and prose have the property of few repeated lines. In program text, the only repeated lines should be empty or consist of bracketing symbols like \texttt{begin} and \texttt{end}; for all other repetitions one would normally write a subprogram. In prose text, the only repeated lines should be empty or contain formatting commands. In applying our algorithm to prose or program text, it is therefore appropriate to choose lines as the atomic symbols. To speed up comparisons, the program should use hashcodes for lines of text rather than performing character-by-character comparisons.
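The line-to-symbol mapping can be as simple as interning each distinct line as a small integer (a sketch of the idea; a real implementation would use hashcodes directly):

```python
def to_symbols(text):
    """Map each line of text to a small integer id; identical lines get
    the same id, so later comparisons cost O(1) per line."""
    ids = {}
    return [ids.setdefault(line, len(ids)) for line in text.splitlines()]
```

A block-move algorithm then runs over the two integer sequences instead of the raw lines.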
We implemented a program incorporating these ideas, called \texttt{bdiff}, and compared it with \texttt{diff}[6], which uses an LCS algorithm. We executed both programs on 1400 pairs of files. Each pair consisted of 2 successive revisions of text, deposited in a database maintained by the Revision Control System[12]. This system stores multiple revisions of text files as differences. Almost all of the sample files contained program text. We observed that \texttt{diff} and \texttt{bdiff} execute with similar speeds, but that \texttt{bdiff} produces deltas that are, on the average, only about 7% smaller. Apparently, block moves and duplicate lines in program text are not frequent enough to obtain significant space savings over LCS algorithms. We expect that the situation is more advantageous for block moves in the other applications mentioned in the introduction.
\section*{Second Improvement of the Basic Algorithm}
A different improvement speeds up our basic algorithm even if the source string contains numerous duplicated symbols. The improvement involves an adaptation of the Knuth-Morris-Pratt string matching algorithm[8], which allows a pattern of length \( l \) to be found in a string of length \( m \) in \( O(m + l) \) steps. Thus, if \( S \) is of length \( m \), \( T \) is of length \( n \), and the average block move is of length \( l \), our algorithm should operate in \( O((m + l) \cdot (n/l)) = O(mn/l) \) steps. Note that the ratio \( m/l \) is a measure of the "difference" of \( S \) and \( T \), and that the runtime of the algorithm is proportional to that ratio. Note also that this measure is independent of the permutation of the common substrings in $T$ with respect to $S$.
An important element in the Knuth-Morris-Pratt algorithm is an auxiliary array $N$ which indicates how far to shift a partially matched pattern or block move after a mismatch. The array $N$ is as long as the pattern, and is precomputed before the match. Precomputing $N$ poses a problem for our algorithm. Since we do not know how long a block move is going to be, we would have to precompute $N$ for the entire unprocessed tail of $T$, although we would normally use only a small portion of it. Fortunately, $N$ can also be computed incrementally. The outline of the adapted pattern matching algorithm is as follows.
Assume the next unmatched symbol is $T[q]$. Start by initializing $N[q]$ and apply the Knuth-Morris-Pratt algorithm to find the first occurrence of $T[q]$. (Note that this is a pattern of length 1.) If this pattern cannot be found, there is no block move including $T[q]$. Otherwise, expand the pattern by 1, compute the next entry in $N$, and reapply the Knuth-Morris-Pratt algorithm to find the first occurrence of the expanded pattern. Start the search with the previous match. Continue this process, until the pattern reaches a length for which there is no match. At that point, the previous match is the maximal block move.
Suppose the maximal block move starting with $T[q]$ has length $l$. The last attempted pattern match is therefore of length $l+1$, and fails. The incremental computation of the entries $N[q, \ldots, q+l+1]$ at a total cost proportional to $l$ assures that the cost of the average match remains $O(m+l)$.
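The incremental scheme can be sketched in Python as follows (our own transcription, not the appendix code): the failure table for the tail $T[q:]$ is extended only when the matched length reaches a new maximum, so roughly $l+1$ entries are ever computed:

```python
def longest_match(S, T, q):
    """Leftmost longest match of a prefix of T[q:] in S, via a KMP scan
    whose failure table is built lazily (sketch of the second
    improvement; names are ours).  Returns (p, l)."""
    tail = T[q:]
    if not tail:
        return 0, 0
    fail = [0, 0]            # fail[i] = longest proper border of tail[:i]
    best_l = best_end = 0
    j = 0                    # length of tail currently matched against S
    for k, sym in enumerate(S):
        while j > 0 and sym != tail[j]:
            j = fail[j]
        if sym == tail[j]:
            j += 1
        if j > best_l:
            best_l, best_end = j, k + 1
            if best_l == len(tail):      # whole tail matched; cannot grow
                break
            while len(fail) <= j:        # lazily extend the failure table
                t = fail[len(fail) - 1]
                while t > 0 and tail[len(fail) - 1] != tail[t]:
                    t = fail[t]
                fail.append(t + 1 if tail[len(fail) - 1] == tail[t] else 0)
    return best_end - best_l, best_l
```

On the running example, `longest_match("uvwuvwxy", "zuvwxwu", 1)` returns \((3, 4)\), matching the block move \((3,1,4)\) found earlier.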
The detailed program is given in the appendix. It is useful for applications (3) and (4) mentioned in the introduction. The idea of incrementally computing auxiliary data structures can also be applied to the Boyer-Moore pattern matching algorithm[1], resulting in a program that runs even faster on the average.
**Reconstructing the Target String**
An edit script that reconstructs target string $T$ from source string $S$ is a sequence of move and add commands. The commands build a string $T'$ left to right. Each block move $(p,q,l)$ in $\Delta_S(T)$ is represented by a command of the form $M(p,l)$, which copies the string $S[p, \ldots, p+l-1]$ to the end of the string $T'$. For any substring $T[u, \ldots, v]$ consisting entirely of symbols that do not occur in $S$, the edit script contains the command $A\ T[u, \ldots, v]$, which simply appends the unmatchable substring to $T'$. After completion of all edit commands, $T' = T$.
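A sketch of edit-script generation and replay in Python (our own illustration; for simplicity it emits one add command per unmatchable symbol rather than coalescing runs):

```python
def edit_script(S, T):
    """Edit script of copy commands ("M", p, l) and append commands
    ("A", symbol) that rebuilds T from S (sketch; one "A" per symbol)."""
    script, q = [], 0
    while q < len(T):
        best_p, best_l = 0, 0       # longest prefix of T[q:] found in S
        for p in range(len(S)):
            l = 0
            while p + l < len(S) and q + l < len(T) and S[p + l] == T[q + l]:
                l += 1
            if l > best_l:
                best_p, best_l = p, l
        if best_l > 0:
            script.append(("M", best_p, best_l))
            q += best_l
        else:
            script.append(("A", T[q]))  # symbol absent from S
            q += 1
    return script

def apply_script(S, script):
    """Replay an edit script against S, producing the target string."""
    out = []
    for cmd in script:
        if cmd[0] == "M":
            _, p, l = cmd
            out.append(S[p:p + l])
        else:
            out.append(cmd[1])
    return "".join(out)
```

Applying the generated script to $S$ reconstructs $T$ exactly.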
In general, $T$ cannot be constructed in a single pass over $S$, because block moves may cross (cf. examples in Sect. 3). If $S$ is a sequential file, one can minimize the number of rewind operations caused by crossing block moves as follows. During the generation of the edit script, it does not matter which one of 2 or more equivalent block moves is chosen. For example, suppose we have the following equivalent, maximal block moves starting with $T[q]$: $B_1 = (p_1, q, l)$ and $B_2 = (p_2, q, l)$, with $p_1 < p_2$. If the previous block move emitted had its $S$-endpoint between $S[p_1]$ and $S[p_2]$, choosing the block move $B_2$ saves one rewind operation for $S$. Our algorithms are easily modified to accommodate this idea. Rather than starting at the left end of $S$ while searching for the longest possible match, they must start with the endpoint of the previous match and "wrap around" at the end of $S$.
So far, we have presented our edit scripts as constructing $T$ separately from $S$. It is also possible to transform $S$ "in place". The following paragraphs discuss the algorithm in some detail.
Suppose we have a buffer $B[0, \ldots, \mathrm{Max}(m, n)]$ initialized to $S$, i.e., $B[i] = S[i]$ for $0 \leq i \leq m$. The goal is to transform the contents of $B$ to $T$. The key to this algorithm is an auxiliary array $A[0, \ldots, m]$, which keeps track of the positions of the original symbols $S[i]$ in $B$. Initially, $A[i] = i$ for $0 \leq i \leq m$. A marker $h$ moves through $A$ from left to right, giving the index of the rightmost symbol involved in a block move so far. Thus, for the $k$'th move command $M\ p_k, l_k$, $h = \mathrm{Max}(p_j + l_j - 1,\ 1 \leq j \leq k)$. There is also a marker $t$ indicating the index of the last symbol processed in $B$.
The first step is to remove all symbols from $B$ which are not in $T$. This step preprocesses the edit script to isolate the symbols to be deleted, and then actually removes them from $B$. It also updates the mapping array $A$ to reflect the compression, and marks those entries of $A$ as undefined whose counterparts in $B$ were deleted. The second step processes the edit commands in sequence. An add command simply inserts the given string to the right of $t$, and resets $t$ to point to the last symbol so inserted. It also updates the array $A$ for the symbols shifted right by the insertion. For each move of the form $M p, l$, compare $p$ and the current value of $h$. If $p > h$, then the current block move is to the right of the previous one. The symbols between $h$ and $p$, i.e., $B[A[h+1], \ldots, A[p-1]]$.
are not included in the current move, but will be moved later. Mark them as such and set \( h \) to \( p + l - 1 \) and \( t \) to \( A[h] \). Thus, the characters \( S[p, \ldots, p + l - 1] \) will be included in the result. Otherwise, if \( p \leq h \), the current block move crosses the previous one, and a substring located before \( t \) must be moved or copied forward. All symbols in that string that were marked for moving by an earlier command are now moved, the others are simply copied forward. It is conceivable that the current block move involves symbols to the left and right of \( h \). In that case, first handle the string to the left of \( h \) by moving or copying elements of the string \( B[A[p], \ldots, A[\min(p + l - 1, h)]] \) after \( B[t] \). The remaining (possibly empty) string \( A[h + 1, \ldots, p + l - 1] \) is simply included by setting \( h \) to \( \max(p + l - 1, h) \). Update \( A \) to reflect the moves and shifts, and set \( t \) to \( A[h] \).
Below is a trace of the algorithm, transforming the string Shanghai to Sakhalin by applying the edit script \texttt{M 0,1; M 2,1; A "k"; M 1,2; A "l"; M 7,1; M 3,1}. The algorithm can be applied to update display screens efficiently, provided the display offers operations for character and line insertion and deletion, as well as a copy/move feature. The latter feature is needed for copying and moving character strings forward in the above algorithm. The auxiliary array \( A \) is allocated in main memory.
(Trace figure: buffer contents after removing unused symbols; after applying M 0,1; M 2,1; after applying A "k"; after applying M 1,2; A "l"; and after applying M 7,1; M 3,1.)
Conclusions
The original string-to-string correction problem as formulated in [13] permitted the editing commands *add*, *delete*, and *change*. Clearly, a *change* command can be simulated with a *delete* followed by an *add*. Any sequence of *add* and *delete* commands can be transformed into an equivalent sequence of *add* and *move* commands. This transformation works since *delete* and *move* commands complement each other, provided no block moves cross or overlap. Our approach of extending the editing commands by permitting crossing block moves results in shorter edit sequences. We developed efficient algorithms for computing those sequences. Reconstructing the target string by applying the edit sequence is efficient if the source string can be accessed randomly.
Appendix: Using the Knuth-Morris-Pratt Pattern Matching Algorithm.
S: array[0..m] of symbol;
T: array[0..n] of symbol;
N: array[0..n] of integer;

q := 0;                                  { start at left end of T }
while q <= n do begin                    { symbols left in T; find longest match starting with T[q] }
    k := 0;                              { start match at left end of S }
    j := q;                              { first symbol of pattern }
    last := q;                           { last symbol of pattern }
    N[q] := q-1;                         { initialize N[q] }
    iN := q-1;                           { initialize computation of N[q+1,...] }
    loop                                 { loop with exit from the middle }
        { try to find a match for T[q]..T[last]; }
        { T[q]..T[last-1] has already been matched }
        kOld := k;                       { save endpoint of old match, if any }
        while (j <= last) and (k <= m) do begin
            while (j >= q) and then (S[k] <> T[j]) do j := N[j];
            k := k+1;  j := j+1
        end
    until (j <= last) or (last = n);     { exit from the middle }
        { found match; now extend the pattern and compute N[last+1] }
        while (iN >= q) and then (T[last] <> T[iN]) do iN := N[iN];
        last := last+1;  iN := iN+1;
        N[last] := iN
    end;                                 { end of loop }
    { print match }
    if j > last then begin               { found match for tail of T }
        print(k-(n-q+1), q, n-q+1);
        q := n+1
    end else if q = last then begin      { no match }
        q := q+1
    end else begin                       { last match failed; take previous one }
        print(kOld-(last-q), q, last-q);
        q := last
    end
end
References
SEMESTER PROJECT
Report
“AODV routing algorithm for multihop Ad Hoc networks”
# Table of Contents
1. **THE PROJECT**
2. **INTRODUCTION TO AD-HOC NETWORKS**
- 2.1. Overview
- 2.2. Characteristics and issues
- 2.3. Terminology
3. **A WORD ON 802.11 AND IPAQS**
4. **ROUTING IN MANET NETWORKS**
- 4.1. Proactive/reactive protocols
- 4.2. Examples
5. **AODV**
- 5.1. Overview
- 5.2. AODV Terminology
- 5.3. Functioning principle
  - 5.3.1. RReq broadcast
  - 5.3.2. RReq forward
  - 5.3.3. RRep generation
  - 5.3.4. RRep forwarding
  - 5.3.5. Error detection
  - 5.3.6. Table maintenance
- 5.4. JAVA Implementation
  - 5.4.1. Message Types
  - 5.4.2. Route table entries and configuration parameters
  - 5.4.3. Integration in Framework
  - 5.4.4. Classes and Methods
- 5.5. Testing
6. **STILL TO BE DONE**
7. **REFERENCES**
8. **SOURCE CODE**
1. The project
Project description (original French):
"Le standard IEEE 802.11 qui décrit le fonctionnement des réseaux sans fil ne considère que des environnements single-hop dans lesquels chaque station n'est capable de communiquer directement qu'avec son entourage. Si on considère des réseaux de plus grande taille sans infrastructure câblée, il est nécessaire d'envisager un algorithme qui permette de propager un message en passant par plusieurs stations relais (sortes de routeur). Pour résoudre ce problème, il existe déjà plusieurs algorithmes de routage bien adaptés au contexte des réseaux ad hoc : AODV (Ad hoc On-demand Distance Vector).
Le premier but de ce projet consiste à porter une implémentation JAVA ou C de l'algorithme MAODV Multicast Ad hoc On-demand Distance Vector), dérivé de AODV qui permet de faire du multicast. Dans un deuxième temps, une simple application illustrant son fonctionnement sera codée et déployée sur les iPaq du réseau mobile du LSR."
Translation:
The IEEE 802.11 standard which describes wireless functioning takes only in account single-hop environments in which each station is able to communicate directly only with its neighbors. If we consider bigger networks without wired infrastructure, it is necessary to consider an algorithm which permits the propagation of a message through several relay stations (kind of routers). In order to solve this problem, several routing algorithms adapted to Ad Hoc networks already exist: AODV (Ad hoc On-demand Distance Vector).
The first goal of this project consists in porting a JAVA or C implementation of the MAODV algorithm (Multicast Ad hoc On-demand Distance Vector), derived from AODV which permits multicast. In a secondary part, a simple application illustrating its use will be coded on iPaqs from the LSR’s mobile network.
2. Introduction to Ad-Hoc Networks
2.1. Overview
Ad-Hoc networks are the new deal in wireless communication. Unlike traditional networks, no infrastructure is needed. All “nodes” (i.e. communicating systems) are equal in role, as no client or server exists. In addition to traditional wireless network problems such as bandwidth optimization, power control and transmission quality enhancement, Ad-Hoc networks bring their own load of new challenges. The lack of infrastructure requires the introduction of new tasks like discovery and maintenance, addressing (no server to give an address to each node!) or routing.
(Figure: from wired networks to Ad-Hoc. i. LAN or Internet: all infrastructure. ii. WLAN or Mobile Network: base stations as necessary infrastructure. iii. Ad-Hoc Networks: no infrastructure, all nodes mobile.)
2.2. Characteristics and issues
Nodes are equal: all nodes can communicate with each other, with the same priority, regardless of position. This means that if two nodes are out of reach and want to communicate, they first have to find each other over the network. But as no central control exists (server, router), Ad-Hoc networks cannot rely on IP-like addresses which would uniquely identify a node. So the first challenge is to provide some kind of identifier for each node, so that nodes know how to “call” one another. Secondly, as all nodes should be reachable, distant (not in the same cell) nodes should communicate through a path of other nodes acting as routers. This is another subject of research: routing with nodes as routers!
Frequent changes in topology: as all nodes are mobile, no topology of the network can be guaranteed. The challenge here will be to try to predict topology changes, in order to ascertain permanent connectivity.
Wireless has lower capacity than wired: this is a technical issue; for now, wireless links can transfer less data than wired links.
Security is limited: as no physical connection is needed, anyone can connect to any wireless network. One should be aware that wireless communication will continue to be less secure than wired.
Higher loss rates and delays: another technical issue is the link itself: air. Electromagnetic waves encounter many interferences from the environment, causing higher loss rates and delays.
Rely on battery: mobility requires the system to be transportable and therefore to run on a battery. This becomes an issue if the battery wears out, breaking connectivity for the surrounding neighbors.
Power limited: each node, even if equal in role, can be different, and more specifically have different power (CPU, memory, …). This adds complexity in determining the speed of the transfer.
Limited storage space: few data can be stored at a time.
2.3. Terminology
Node: any system (Portable computer, pocket computer, or even cellular phone) that is part of the wireless network.
Cell: abstract field of reception of a particular node. A node can send or receive only to nodes within the cell.
Packet: data unit of message. A message (data) is sent in several packets.
Source node: the node that is willing to send a message.
Destination node: the intended target of the message.
Forwarding node: all nodes between source and destination nodes. They are expected to forward messages from source to destination.
Hop: path from one node to another. A route contains one more hop than forwarding nodes.
Broadcast: the only way to transfer data in a wireless network; the signal can be received from anywhere in the emission area.
Unicast: opposed to broadcast; a message is sent to one destination only.
Flooding: the simplest routing algorithm. The data is just sent to everybody and forwarded, until the destination is attained. Potentially all the network can receive the data, resulting in saturation (congestion).
Congestion: when too much data is sent on a network link or zone, errors due to collisions and physical limits may appear, resulting in data loss or delays.
3. A word on 802.11 and iPAQs
802.11 is the family of official IEEE standards for wireless communication:
- IEEE 802.11-1997:
Wireless LAN medium access control and physical layer specifications
- IEEE 802.11a-1999:
High-speed physical layer
- IEEE 802.11b-1999:
Higher-speed physical layer extension
- IEEE 802.11d-2001:
Specification for operation in additional regulatory domains
802.11 works with two different operating modes with different architectures:
- Infrastructure mode: Cooperative and structured (WLAN).
- Independent (ad Hoc) mode: Concurrent and distributed.
The iPAQ is Compaq’s pocket PC solution; it can receive an 802.11 network card through a PCMCIA port.

| Feature | Specification |
|---|---|
| Operating system | Windows® Powered Pocket PC, Linux |
| Processor | 206 MHz Intel StrongARM 32-bit RISC |
| Memory | 32 MB RAM, 16 MB ROM |
| Development environment | Visual C++, Java |

Porting JAVA applications to these machines requires caution, as the supported version is only v1.6.1. This is not a final statement, considering the evolution of such equipment.
4. Routing in MANET networks
The goal is to find stable routes (despite mobility) to rely on for packet dissemination. Furthermore, the route should be, if not optimal, at least short. One problem in MANET networks is, as discussed before, the lack of an identifier for each node (IP-like address). Supposing this problem is solved, for instance by a number reflecting a unique serial number of the system, we will now concentrate on how to send a message to an intended target.
4.1. Proactive/reactive protocols
Routing protocols are divided into two categories: proactive and reactive protocols. Some attempts have been made to develop adaptive/hybrid protocols able to work well in all environments.
Proactive protocols:
- Always maintain routes
- Little or no delay for route determination
- Consume bandwidth to keep routes up-to-date
- Maintain routes which may never be used
Reactive protocols:
- Lower overhead since routes are determined on demand
- Significant delay in route determination
- Employ flooding (global search)
- Control traffic may be congestive
Hybrid protocols (combination of proactive and reactive) are also in research.
### 4.2. Examples
**DSDV (Destination Sequenced Distance Vector: proactive):** Each node maintains a routing table which stores the next hop, cost metric towards each destination and a sequence number that is created by the destination itself. Each node periodically forwards routing table to neighbors. Each node increments and appends its sequence number when sending its local routing table. Each route is tagged with a sequence number; routes with greater sequence numbers are preferred. Each node advertises a monotonically increasing even sequence number for itself. When a node decides that a route is broken, it increments the sequence number of the route and advertises it with infinite metric.
(Figure: the destination advertises a new sequence number.)
**DSR (Dynamic Source Routing: reactive):** The idea is that when a source node wants to send a packet to a destination, but does not know a route to it, the source initiates a route discovery.
By sending a route request (RReq), the source node floods the network, in order to let all nodes know who (what destination node) it is looking for. The RReq stores the route taken by appending the identifier of each forwarding node, and propagates through the network until the intended target (destination) receives it.
This destination node sends then a route reply (RRep), using the reversed route taken by the RReq. The RRep includes the complete route from source to destination. On reception of the RRep, the source can finally send its packet after including the entire route in the packet header. Intermediate nodes use the source route included in the packet to determine to which node the packet should be forwarded.
1. Source S broadcasts a RReq in the network, which is forwarded only once per node.
2. The intended target D sends the route reply.
3. The source S sends the data packet with the route included in the header.
Other names like “Optimized Link State Routing” (OLSR: proactive) or “Zone Routing Protocol” (ZRP: Hybrid), also exist.
**AODV (Ad hoc On-demand Distance Vector: reactive):** improvement of DSR, see next chapter.
### 5. AODV
As seen before, in DSR sources include routes in the packet header. Depending on the length of the route, this header can become heavy (even if the content of the packet is small) and degrade performance. AODV solves this problem by improving on DSR: it does not need to include the entire route in each packet.
#### 5.1. Overview
AODV stands for Ad-Hoc On-Demand Distance Vector and is therefore (on demand) a reactive protocol. It could actually be seen as a hybrid protocol, as it also has some proactive characteristics (a routing table at each node).
The first change from DSR to AODV is the introduction of routing tables at the nodes, so that packets do not have to contain the entire route in their header. But AODV retains the desirable feature of DSR that routes are maintained only between nodes which need to communicate (active routes).
Route Requests are forwarded in the same way as in DSR.
When a node re-broadcasts (forwards) a Route Request, it sets up a reverse path pointing towards the source of the RReq.
The intended destination replies by sending a Route Reply (RRep), which is unicast to the next hop in the newly built reverse route, to reach the originator of the RReq.
On reception of each control message (RReq, RRep, …), a node can update its routing table in order to take into account the evolution of the network (i.e. topology changes, obsolete routes, …).
5.2. AODV Terminology
(From the IETF’s “manet-aodv-12” draft)
active route:
A route towards a destination that has a routing table entry that is marked as valid. Only active routes can be used to forward data packets.
broadcast:
Broadcasting means transmitting to the IP Limited Broadcast address, 255.255.255.255. A broadcast packet may not be blindly forwarded, but broadcasting is useful to enable dissemination of AODV messages throughout the ad hoc network.
destination:
An IP address to which data packets are to be transmitted. Same as "destination node". A node knows it is the destination node for a data packet when its address appears in the appropriate field of the IP header. Routes for destination nodes are supplied by action of the AODV protocol, which carries the IP address of the destination node in route discovery messages.
forwarding node:
A node that agrees to forward packets destined for another node, by retransmitting them to a next hop that is closer to the unicast destination along a path that has been set up using routing control messages.
forward route:
A route set up to send data packets from a node originating a Route Discovery operation towards its desired destination.
invalid route:
A route that has expired, denoted by a state of invalid in the routing table. An invalid route is used to store the previously valid route information for an extended period of time. An invalid route may not be used to forward data packets.
originating node:
A node that initiates an AODV message to be processed and possibly retransmitted by other nodes in the ad hoc network. For instance, the node initiating a Route Discovery process and broadcasting the RREQ message is called the originating node of the RREQ message.
reverse route:
A route set up to forward a reply (RREP) packet back to the originator from the destination or from an intermediate node having a route to the destination.
sequence number:
An increasing number maintained by each originating node. When used in control messages, it allows other nodes to determine the freshness of the information coming from the originating node.
valid route:
See active route.
5.3. Functioning principle
AODV uses 5 different types of control messages: Route Requests (RReq), Route Reply (RRep), Route Reply Acknowledgement (RRepAck), Route Error (RErr), and Hello (Hello) messages.
We can highlight some steps in the algorithm, basically for each type of message sent:
5.3.1. RReq broadcast
Before sending a message to a destination, the originating node consults its routing table to see if a next hop (next forwarding node in the route) exists. If none is found, or the route is obsolete, the node needs to initiate a route discovery.
So, like in DSR, the originating node broadcasts a RReq in the network. Each RReq contains information on the originating and destination node (ID and sequence number). It also counts the hops from source to destination.
5.3.2. RReq forward
Any node receiving a RReq forwards it, but only once (as it will receive the same RReq several times from its neighbors). At this time, the route to the previous node is also created or updated. If a node is the destination itself, or has an active route to the destination, it responds with a RRep.
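The “forward only once” rule is usually enforced with a small buffer of already-seen RReq identifiers, keyed by the pair (Orig_ID, RReq_ID) that uniquely identifies a request. A minimal sketch, with illustrative class and method names (not the ones from this implementation):

```java
import java.util.HashSet;
import java.util.Set;

// Tracks RReqs already seen so that each one is forwarded at most once.
class RreqSeenBuffer {
    private final Set<String> seen = new HashSet<>();

    // Returns true only the first time a given (origId, rreqId) pair is observed;
    // subsequent copies of the same RReq from other neighbors return false.
    boolean firstTimeSeen(long origId, long rreqId) {
        return seen.add(origId + ":" + rreqId);
    }
}
```

A node would forward (and process) the RReq only when `firstTimeSeen` returns true; a real buffer would also expire old entries after PATH_DISCOVERY_TIME.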
5.3.3. **RRep generation**
RRep can be generated either by the destination node or by a forwarding node. In both cases they naturally do not forward the RReq, but the information included in the RRep is slightly different. RReps are *unicast* to the previous node (the one from which the RReq came).
5.3.4. **RRep forwarding**
Each node on the route receives a RRep that it needs to forward to the next hop, found by consulting its table for the originating node. The RReps are *unicast*, so that only nodes on the route forward them towards the originating node. Naturally, these forwarding nodes also update their tables with the information contained in the RRep, building the forward route to the destination.
5.3.5. **Error detection**
A RErr message is iteratively unicast to all *precursors* (list of forwarding nodes in a route) stored in the node’s routing table. A node initiates a RErr message if either:
- it detects a link break for the next hop of an active route in its routing table while transmitting data, or
- it gets a data packet destined to a node for which it does not have an active route, or
- it receives a RErr from a neighbor for one or more active routes.
5.3.6. **Table maintenance**
A route is only updated if the new sequence number coming from the AODV control message is either:
- higher than the destination sequence number in the route table, or
- the sequence numbers are equal, but the hop count (of the new information) plus one, is smaller than the existing hop count in the routing table, or
- the sequence number is unknown.
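The three table-maintenance conditions above translate directly into a predicate. A sketch, with a hypothetical class name (`RouteUpdateRule` is not part of this implementation):

```java
// Decides whether newly received routing information should replace an
// existing entry, following the three AODV table-maintenance rules.
class RouteUpdateRule {
    static boolean shouldUpdate(long currentSeqNum, int currentHopCount,
                                long newSeqNum, int newHopCount,
                                boolean newSeqNumUnknown) {
        if (newSeqNumUnknown) return true;              // rule 3: unknown sequence number
        if (newSeqNum > currentSeqNum) return true;     // rule 1: fresher information
        return newSeqNum == currentSeqNum
                && newHopCount + 1 < currentHopCount;   // rule 2: equal freshness, shorter route
    }
}
```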
5.4. **JAVA Implementation**
Note: I tried as much as I could to follow the IETF draft for manet-aodv, 12th version [1], but the integration cannot be exactly as described, for technical reasons (mainly framework integration). I had a C implementation from Uppsala University, but considering that I had to integrate my work into a framework being built by another group, I decided it was simpler to start from scratch. This way I also avoided the problem of translating from a language that might not have all the functionality of C.
5.4.1. Message Types
Each message type becomes a JAVA object with the fields described below. One difference with the draft is that message types are on 16 bits (char) instead of 8, to be compatible with the rest of the framework, which includes a configuration file for this kind of constants. I still included the type as a field of the message objects. Another difference is the length of the ID (IP in the draft), which is 64 bits instead of 32, for the same compatibility reason.
Fields of each message type (different from the draft):
**Route Request message (RReq):**
```
| Type | Flags | Hop_Count |
+------+-------+-----------+
|          RReq_ID         |
+--------------------------+
|          Dest_ID         |
+--------------------------+
|        Dest_SeqNum       |
+--------------------------+
|          Orig_ID         |
+--------------------------+
|        Orig_SeqNum       |
+--------------------------+
```
*Type*: type of message (set to 1 in the draft)
*Flags*: 5 RReq parameters:
- J (join flag) and R (repair flag): both reserved for multicast
- G (gratuitous RRep flag): indicates whether a gratuitous RRep should be unicast to the node specified in the Dest_ID field
- D (destination only flag): indicates that only the destination may respond to this RReq
- U (unknown sequence number flag)
*Hop_Count*: increasing value to count the hops from source to destination
*RReq_ID*: A sequence number uniquely identifying the particular RReq when taken in conjunction with the originating node’s ID.
*Dest_ID*: The Identifier (address) of the destination for which a route is desired.
*Dest_SeqNum*: The greatest sequence number received in the past by the originator for any route towards the destination.
*Orig_ID*: The Identifier of the node which originated the Route Request.
*Orig_SeqNum*: The current sequence number to be used for route entries pointing to (and generated by) the originator of the route request.
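A plain-Java rendering of these fields is sketched below. The field layout follows the list above with the 64-bit IDs used in this implementation; the class shape is illustrative, since the real implementation builds messages through the framework’s message pool and factories:

```java
// Route Request message: fields as listed above; IDs are 64-bit in this
// implementation (the draft uses 32-bit IP addresses).
class RreqMessage {
    char type = 1;     // message type, 16-bit in this framework (1 = RReq in the draft)
    byte flags;        // J, R, G, D, U bits
    int hopCount;      // incremented at each forwarding node
    long rreqId;       // unique when taken together with origId
    long destId;
    long destSeqNum;
    long origId;
    long origSeqNum;

    // Called by each forwarding node before re-broadcasting the RReq.
    void incrementHopCount() { hopCount++; }
}
```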
Route Reply message (RRep):
```
| Type | Flags | Prefix Size | Hop_Count |
+------+-------+-------------+-----------+
|                Dest_ID                 |
+----------------------------------------+
|              Dest_SeqNum               |
+----------------------------------------+
|                Orig_ID                 |
+----------------------------------------+
|                Lifetime                |
+----------------------------------------+
```
**Type:** type of message (set to 2 in the draft)
**Flags:** R(Repair flag), A(Acknowledgement field)
**Prefix Size:** if nonzero, the 5-bit prefix size specifies that the indicated next hop may be used for any node with the same routing prefix (as defined by the prefix size) as the requested destination.
**Hop Count:** the number of hops from the Originator ID address to the Destination ID address. For multicast route requests this indicates the number of hops to the multicast tree member sending the RRep.
**Dest_ID:** Identifier (address) of the destination for which a route is supplied.
**Dest_SeqNum:** The destination sequence number associated to the route.
**Orig_ID:** Identifier of the node which originated the RReq for which the route is supplied.
**Lifetime:** The time in milliseconds for which nodes receiving the RRep consider the route to be valid.
Route Reply Acknowledgment message (RRepAck):
```
| Type |
+------+
```
**Type:** type of message (set to 4 in the draft)
**Route Error message (RErr):**
```
| Type | N |   Reserved   | Dest_Count |
+------+---+--------------+------------+
|        Unreachable Dest_ID           |
+--------------------------------------+
|       Unreachable Dest_SeqNum        |
+--------------------------------------+
|    Additional Unreachable Dest_ID    |
+--------------------------------------+
|  Additional Unreachable Dest_SeqNum  |
+--------------------------------------+
```
**Type:** type of message (set to 3 in the draft)
**N_flag:** no-delete flag; set when a node has performed a local repair of a link, and upstream nodes should not delete the route.
**Unreachable_Dest_SeqNum:** the sequence number in the route table entry for the destination listed in the previous Unreachable Dest_ID field.
**Dest_ID:** the ID address of the destination that has become unreachable due to a link break.
**Dest_Count:** the number of unreachable destinations included in the message; MUST be at least 1.
Dest_ID and Unreachable_Dest_SeqNum are actually arrays containing the addresses and sequence numbers.
### 5.4.2 Route table entries and configuration parameters
Route entries are specified by the AODV draft; not all of them are used in this implementation (though all are present). They are:
- **Destination ID**
- **Destination Sequence Number**
- **Valid Destination Sequence Number**
- **Interface:** not used here
- **Hop Count:** number of hops needed to reach destination
- **Next Hop:** next node ID to which message should be sent
- **List of Precursors:** list of forwarding nodes in the route
- **Lifetime:** expiration or deletion time of the route
- **Routing Flags:** not used here
- **State:** not used here
*Parameters:* (in the configuration file, with default values). Again, all of them are present in this implementation, but not all are used. They are rather self-explanatory.
### AODV Parameters (as specified in draft):
| Parameter Name | Default Value (update formula) |
|---|---|
| ACTIVE_ROUTE_TIMEOUT | 3,000 milliseconds |
| ALLOWED_HELLO_LOSS | 2 |
| BLACKLIST_TIMEOUT | RREQ_RETRIES * NET_TRAVERSAL_TIME |
| DELETE_PERIOD | |
| HELLO_INTERVAL | 1,000 milliseconds |
| LOCAL_ADD_TTL | 2 |
| MAX_REPAIR_TTL | 0.3 * NET_DIAMETER |
| MIN_REPAIR_TTL | |
| MY_ROUTE_TIMEOUT | 2 * ACTIVE_ROUTE_TIMEOUT |
| NET_DIAMETER | 35 |
| NET_TRAVERSAL_TIME | 2 * NODE_TRAVERSAL_TIME * NET_DIAMETER |
| NEXT_HOP_WAIT | NODE_TRAVERSAL_TIME + 10 |
| NODE_TRAVERSAL_TIME | 40 |
| PATH_DISCOVERY_TIME | 2 * NET_TRAVERSAL_TIME |
| RERR_RATELIMIT | 10 |
| RING_TRAVERSAL_TIME | 2 * NODE_TRAVERSAL_TIME * (TTL_VALUE + TIMEOUT_BUFFER) |
| RREQ_RETRIES | 2 |
| RREQ_RATELIMIT | 10 |
| TIMEOUT_BUFFER | 2 |
| TTL_START | 1 |
| TTL_INCREMENT | 2 |
| TTL_THRESHOLD | 7 |
| TTL_VALUE | |
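Several of these defaults are derived from the others by the update formulas in the table. A sketch of those derivations as plain constants (the class name is illustrative, and values follow the table above):

```java
// AODV timing parameters with the draft's default update formulas.
class AodvParams {
    static final int ACTIVE_ROUTE_TIMEOUT = 3000;  // ms
    static final int NODE_TRAVERSAL_TIME  = 40;    // ms, conservative per-hop estimate
    static final int NET_DIAMETER         = 35;    // maximum hops between two nodes
    static final int RREQ_RETRIES         = 2;

    // Derived values, computed exactly as the table's update formulas state.
    static final int MY_ROUTE_TIMEOUT    = 2 * ACTIVE_ROUTE_TIMEOUT;
    static final int NET_TRAVERSAL_TIME  = 2 * NODE_TRAVERSAL_TIME * NET_DIAMETER;
    static final int PATH_DISCOVERY_TIME = 2 * NET_TRAVERSAL_TIME;
    static final int BLACKLIST_TIMEOUT   = RREQ_RETRIES * NET_TRAVERSAL_TIME;
    static final int NEXT_HOP_WAIT       = NODE_TRAVERSAL_TIME + 10;
}
```

With the defaults above, a full route discovery is given up after PATH_DISCOVERY_TIME = 5,600 ms.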
#### 5.4.3. Integration in Framework
The difficulty of this project resided in its integration into a changing framework. Cooperation with the framework implementers was necessary so that they would provide enough tools for an algorithm like AODV to be implemented. However, working on integrating something into a changing framework delays the implementation.
The framework includes a “message pool” to avoid building too many objects, which is costly in JAVA. To use it, one first has to get access to this message pool and to implement a message factory for each message object. The message factory tells the pool how to construct the message. Then, to build a message, we ask the message pool to return a message object of a specified type. Finally, the message can be set with parameters and, at last, sent.
The framework is composed of layers (Asynchronous Layers) arranged in a stack, as a network application requires. Each layer has a thread and a buffer in which messages are stored to be consumed by the upper layer. A layer notifies the upper layer when a message is put in its buffer, and is awakened by the lower layer on reception of a message (which is buffered in the lower layer).
AODV is a complete layer (routing) and takes its place above the dispatcher.
See figure next page.
The dispatcher is a layer dedicated to deliver message to the proper layer or module.
The modules are side programs (not part of the layer stack), which offer a special feature usable by several layers. Already implemented is the Hello Module.
Application layer is situated on top of the stack. The implemented application is a chat working in multihop.
Virtual Networks is situated above the communication layer (Asynchronous Multicast) and its purpose is to create virtual networks so that only nodes on the same network can directly communicate with each other. It was implemented so that multihop communication could be tested.
5.4.4. Classes and Methods
The AODV package contains 11 classes: 4 for the message objects and their corresponding message factories, one for the route table, one for its entries, and one for the algorithm. There are only four message classes because Hello messages were implemented in a module beside the layer stack.
Methods description (in class “AODV”):
Apart from the methods for package integration in framework (Constructor, Initialize, Send, sendMessage, handleMessage, Startup), the methods are in concordance with AODV steps.
Constructor: empty, but necessary for building the stack.
Initialize: all the initializations needed, i.e. access to the pool, reading of the configuration file (manet.config) for specific parameters…
sendMessage: overrides super-class sendMessage method in order to handle message coming from the above layer. Actually calls this class’s handleMessage.
Send: called when sending a message from the class. This method sends the actual control message and throws an error message stating the nature of the message for which the error occurred.
Startup: starts the thread inherited from Asynchronous Layer. It will sleep after sending a message and will be awakened by the lower layer upon message arrival.
handleMessage: takes action depending on the message received.
```
if (AODV message type)                process message with the proper process<Msg> method
else if (destination is thisNode)     put message in buffer
else if (destination is 0)            put message in buffer and broadcast again if (TTL > 1)
else if (route to destination known)  send to next forwarding node
else                                  generate a route discovery for the destination (send a RReq)
```
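The dispatch chain above can be sketched as a pure decision function. This is a simplified skeleton only (the real handleMessage works on framework message objects and the pool; names and the string labels are illustrative):

```java
// Simplified dispatch mirroring handleMessage's decision chain.
class Dispatch {
    static final long BROADCAST = 0L;

    // Returns the action taken, as a label, for a message addressed to destId
    // arriving at node thisNode with the given remaining TTL.
    static String handle(boolean isAodvControl, long destId, long thisNode,
                         int ttl, boolean routeKnown) {
        if (isAodvControl)        return "process";
        if (destId == thisNode)   return "deliver";
        if (destId == BROADCAST)  return ttl > 1 ? "deliver+rebroadcast" : "deliver";
        if (routeKnown)           return "forward";
        return "route-discovery";
    }
}
```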
RReqGen: asks for a route for a message to reach a destination. The RReq is buffered (to avoid receiving duplicates from neighbors) and then broadcast. The RReq sending rate is limited. Repeating RReq attempts for the route discovery of a single destination is not yet implemented; it must use a binary exponential backoff (to set the waiting time before resending a RReq). An expanding ring search technique should also be used to prevent unnecessary flooding of RReqs.
RReqProcess: takes action upon RReq arrival. Creates or updates a route to the previous hop and to the originator. If the node is the destination, it generates a Route Reply; if not, it forwards the RReq.
RReqFwd: forwards (broadcasts) the Route Request if the node is not the destination and the RReq’s TTL is greater than 1.
RRepGen: Route Reply generation from destination. The destination sends a route reply to the previous node.
RRepGenInter: Route Reply generation from an intermediate node knowing a route to the destination. The node updates its route table entry for the originating node by placing the next hop towards the destination and the last hop node in the precursor lists (for the forward and reverse routes, respectively).
RRepGratuitousGen: sends a gratuitous RRep to the Originator (if the gratuitous Flag is set in the RReq). It builds a route from destination to Originator, just as if the destination had issued a fictitious RReq for the originating node.
RRepProcess: takes action upon RRep arrival.
Creates or updates a route to the previous hop.
Creates or updates the forward route.
If (node is not originator) forward the RRep
Else send the queued messages for which the node now knows a route.
**RRepFwd**: forwards Route Reply if node is not the Originator (of the RReq), and a forward route has been created or updated. The node sends a new route reply to next node, which can be found by consulting the node’s route table for the originator¹ of the RReq.
**displayError**: displays debugging message (with 3 levels of debug)
**updateRreqBuff**: scans the node’s sent RReq messages and removes obsolete entries.
**scanTable**: scans the node’s routing table for a specified destination and returns its index, or “-1” if not found.
**updateTable(entry)**: updates the routing table, by replacing the specified entry by a new one.
**updateTable()**: scans the routing table to remove timed out routes.
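The timed-out-route scan in updateTable() amounts to comparing each entry’s lifetime with the current clock. A minimal sketch with illustrative names (the real RouteEntry carries many more fields):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal route entry: destination and absolute expiry time in ms.
class RouteEntry {
    final long destId;
    final long lifetime;   // expiration time of the route

    RouteEntry(long destId, long lifetime) {
        this.destId = destId;
        this.lifetime = lifetime;
    }
}

// Holds entries and drops those whose lifetime has passed.
class RouteTable {
    final List<RouteEntry> entries = new ArrayList<>();

    void purgeExpired(long now) {
        entries.removeIf(e -> e.lifetime <= now);
    }
}
```

A fuller version would first mark expired routes as invalid and only delete them after DELETE_PERIOD, as the draft requires.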
### 5.5. Testing
Included with the framework is a chat, and with the use of virtual network, it is possible to experience multihop routing. Virtual networks is a layer that allows to emulate networks with nodes that might not all see each other. For example, one can create two networks, with nodes belonging to one or the other (or both). The interest is that the node(s) belonging to both networks will have to act as forwarding nodes, for the others to communicate from a network to another. This is easier than to position nodes over a field to test multihop routing!

The chat is a simple application where people can send messages to others by choosing a node’s name from a list. It is necessary to do a broadcast at the beginning so that the lists are filled with node names (there is no possibility of entering a node’s ID yet).
It shows 3 windows, one for the main chat, and one for each module (Neighbor, Statistics):
---
¹ The node for which the route request is supplied
Module windows:
[Screenshots of the Neighbor Table and Statistics module windows]
## 6. Still to be done
- **Generate and handle RReq’s**: in order to detect the departure of a forwarding node, we have to implement RReq message generation. This can be done using Reto Krumenacher’s neighboring module, which periodically sends Hello messages. This way, local repair can also be implemented.
- **Implement RReq retry**, using exponential backoff (to determine the waiting time for an RRep) and the ring search technique (to determine the TTL of RReq’s) in order not to saturate the network. This would require a thread that would wait for an RRep, while the layer’s thread remains dormant.
- **Add RRepAck feature** in the configuration file (for unreliable or unidirectional links). This is to be certain RReps are not lost during route discovery (as they are not resent).
## 7. References
- [1] AODV:
  Charles E. Perkins, Elizabeth M. Belding-Royer, Samir R. Das
- [2] DSR and AODV:
  Mobile Ad Hoc Networks
  Jim Thompson, Musenki
  http://nycwireless.net/presentation/jt_adhoc_tutorial.pdf
- [3] Mobile Ad Hoc Networks:
  Sylvia Giordano
  http://lcawww.epfl.ch/Publications/Giordano/Giordano01a.pdf
- [4] 802.11 and iPacks:
  General overview of IEEE 802.11
  David Cavin
  http://lsrwww.epfl.ch/cavin/work/manet/presentation.pdf
## 8. Source code
In annex.
If you have any difficulty with this product, please write to:
Mandarin Software,
Europa House, Adlington Park,
Adlington, Macclesfield SK10 4NP
No material may be reproduced in whole or in part without written permission. While every care has been taken, the publishers cannot be held legally responsible for any errors or omissions in the manual or the software.
# Contents
## Introduction
- Compilers versus Interpreters
## 1: Installation
- Backing up the compiler disc
- Installing STOS v2.4 onto a floppy disc system
- Installing STOS v2.4 on a hard disc
- Auto-loading the compiler
- Using the compiler on a 512k ST with one floppy drive
- On a 512k ST with two floppy drives
- With a ramdisc
- On a hard disc
## 2: The Compiler accessory
## 3: Compiler tutorial
## 4: Troubleshooting
## 5: Extension commands
- COMTEST ON / OFF / ALWAYS
## 6: The new floating point routines
- Converting old format files
## 7: Technical details
- Garbage collection
- How the compiler works
## 8: Utility accessories
- Ramdisc
- Format
---
**Foreword** by François Lionet
I would like to take this opportunity to thank you for your continued support of the STOS Basic package. I do hope you like this compiler, as I put all my programming knowledge and four months of my life into it. Rest assured that Jawx, Mandarin and I will do the maximum possible to help you with any questions or difficulties. We’ll also keep on developing the STOS range to increase its power still further. So do have fun writing games, compiling them, and hopefully even selling them. Finally, please go on supporting us, and don’t give this compiler to other people. MERCI BEAUCOUP, as we say in France. Try to imagine how you would feel, if someone were to steal your latest STOS masterpiece, especially if programming was your only source of income. And remember, software piracy isn’t just a game, it is a genuine threat to the entire software industry.
I’ve only one other thing to say: Try compiling the compiler! Happy compiling.
---
François Lionet
# Introduction
The original STOS Basic package set a new standard for Atari ST software. Now, hot on the heels of its phenomenal success, comes this amazing utility which can transform any existing STOS Basic program into incredibly fast machine code. The new STOS compiler gives you all the speed of a language like C, with the ease of use you have come to expect from STOS Basic. Unlike the compilers for other Basics, the STOS compiler is completely interactive. So you can compile, run and test your programs directly from STOS Basic. You can also create standalone programs which can be executed straight from the Gem desktop. These programs can be freely distributed without any copyright restrictions.
It's important to note that the STOS Compiler has been designed especially with 520ST users in mind. This means you can compile full-sized STOS Basic programs on an unexpanded machine with absolutely no disc swapping! Even on the smallest system, you will easily be able to compile all the programs from the STOS Games disc.
This package is delightfully easy to use: All the features are controlled directly from the mouse, using a simple accessory program. Compared to the complexities of programs like the Sprite editor, or the Character generator, the compiler is remarkably uncomplicated. But don't be deceived — the STOS compiler is a very sophisticated program indeed.
**Compilers versus Interpreters**
But what exactly is a compiler? As you may know, the ST's 68000 processor is only capable of understanding an internal language known as machine code. This means that you cannot simply enter Basic commands straight into the ST without first translating them into machine-code using STOS Basic.
Supposing you were presented with some valuable text written in an unfamiliar language (say French). There are two possible ways this could be translated into English. One idea might be to take each word in the text, and look it up separately in a French-English dictionary.
This approach is known as interpretation, and is used by the standard STOS Basic. Whenever a program is run, each instruction is checked against a dictionary stored somewhere in the ST's memory, and the appropriate machine code routine is subsequently executed.
Another solution to the above problem might be to hand the text over to a professional translator. This person could now rewrite the entire document in English, which you could then read directly. The same idea can also be applied to a Basic program. In this case, the translator corresponds to a separate program known as a compiler. The STOS Basic compiler converts the complete STOS Basic program into machine-code, producing a new version which can be executed immediately without the need for further translation.
The main advantages of a compiled program over an interpreted one can be summarised like this:
1 Compiled programs execute up to three times faster than the equivalent interpreted program.
2 Compiled programs can be run directly from the Gem Desktop, or from an AUTO folder. They do not need any of the files from the STOS folder in order to run.
3 Once a program has been compiled, it is impossible to translate it back into STOS Basic. So there is no chance of anyone stealing your original code.
Against this, there is one single disadvantage.
Compiled programs are larger than interpreted programs.
This statement is actually slightly misleading because the real size of an interpreted program is far larger than you would initially expect. Take the game Orbit, for instance. Although this is apparently only 60k, if you were to create a run only version you would need to include all the separate program modules contained in the STOS folder. The total size of the program would therefore be a surprising 270k.
Contrast this with a compiled version of the same program. The final length of Orbit after completion is around 130k, which is over 140k less than the interpreted version! So although a compiled program may look larger than an interpreted one, it’s often considerably smaller.
# 1: Installation
Before you can use your STOS Basic compiler for the first time, you will need to configure it for your ST. Although the configuration process may seem a little complicated, it can normally be completed in under ten minutes, and only needs to be done once. It’s important to emphasise that the compiler itself is incredibly easy to use. So if you have already mastered the intricacies of the STOS Sprite editor, this package will hold no terrors for you!
You should begin by making a backup of the system on a fresh disc. Once you’ve created this disc, you should use it for all subsequent compilation. You can now hide the original disc somewhere safe, secure in the knowledge that you can make another copy of the compiler if the backup gets corrupted.
**Backing up the Compiler disc**
1. Slide the write protect tab of the original disc so that you can see through the hole. This will guard your disc against possible mistakes during copying.
2. Place a blank disc into drive A and format it in the normal way.
3. Now put the compiler disc into drive A and drag the icon over drive B.
4. Follow the prompts displayed in the Gem dialogue boxes.
Note that this package was only designed to run under STOS Basic version 2.4 or higher. You don’t need to panic if you have an earlier version, as an upgrade to 2.4 is included free with the compiler.
The main improvements incorporated into version 2.4 are:
- Better support for extension files.
- The Floating point arithmetic now uses single precision, and is much faster.
A few minor bugs have been fixed. Ninety per cent of existing STOS Basic programs will be totally compatible with the new version. But any programs which use floating point arithmetic will need to be converted to STOS 2.4 using the CONVERT.BAS program supplied with this disc. See Chapter 6 for further details.
**Installing STOS V2.4 onto a floppy disc system**
1. Boot up STOS Basic as normal.
2. Place the compiler disc into drive A.
3. Enter the line:
```
run "stosv204.bas"
```
4. Select the current drive using the A and B keys, and hit G to load the new STOS files into the ST's memory. You can now place a disc containing STOS Basic into the appropriate drive. We recommend that you update ALL your copies of STOS to version 2.4 as this will avoid any potential mixups with the compiler. But don't update your original copy of the STOS language disc until AFTER you have successfully copied one of your backups, and tested it carefully. Otherwise a single corrupted file on the compiler disc could accidentally destroy all your copies of STOS in one fell swoop!
5. Repeat step 4 for all your working copies of STOS Basic.
**Installing STOS V2.4 on a hard disc**
1. Create a floppy version of STOS V2.4 using the above procedure.
2. Copy the STOS folder onto the hard disc along with the BASIC204.PRG file.
**Auto-loading the compiler**
The action of the compiler is controlled through the accessory program COMPILER.ACB. This can be loaded automatically by changing the EDITOR.ENV file on the STOS boot disc.
Insert the original STOS language disc into drive A and run the configuration program CONFIG by typing:
```
run "config.bas"
```
When the main menu appears, click on the NEXT PAGE icon. You can now add COMPILER.ACB to the list of accessories which will be loaded on start-up. Click the mouse on the first free space of the accessory list and enter the line:
```
compiler.acb
```
It's a good idea to add the following function key definitions to the standard list. This will simplify the process of loading and saving compiled programs considerably.
F14(shift-F4) load"*.CMP"
F15(shift-F5) tsave"*.CMP"
You can now save the new configuration file onto your working copy of STOS Basic using the "SAVE ON DISC" command. Then copy the COMPILER.ACB file onto the language disc so that STOS Basic can pick it up off the root directory as it boots up.
**Using the compiler**
The compiler accessory can be executed at any time directly from the <HELP> menu. In order for the compiler to work, the files contained in the COMPILER folder should always be available from the current drive. This reduces the amount of memory used by the compiler to a mere 25k, and allows you to compile acceptably large STOS Basic programs on a 520 ST.
The optimum strategy for using this package varies depending on your precise system configuration. Here is a full explanation of how the package can be set up for use with most common ST systems.
**Using the compiler on a 512k ST with one floppy drive**
If you are intending to compile large programs on an unexpanded machine, it's wise to keep a copy of the COMPILER folder on every disc you will be using for your programs. This will allow you to compile large programs directly onto the disc, without the risk of running out of memory. A special program called COMPCOPY is supplied for this purpose, which automatically copies the appropriate files onto your discs. Insert the compiler disc into drive A and type:
```
accload "compcopy"
```
Press HELP and select COMPCOPY with the appropriate function key. Now press G to load the entire contents of the folder into the ST's memory. You will then be prompted for a blank disc, which should be placed into drive A, and the compiler files will be copied onto your new disc.
Depending on the format of your drive, you will be left with either 200k or 480k on each disc. This should prove more than adequate for all but the largest STOS programs. Despite this, it's still possible that you will occasionally run out of memory. See the troubleshooting section at the end of this chapter if you have any problems.
Incidentally, the STOS compiler does NOT allow you to swap discs during the compilation process. So don't try to compile a program from drive A to drive B if you are limited to a single drive.
**Using the compiler on a 512k ST with two floppy drives**
When you use the compiler, place a disc containing the COMPILER folder into drive A, and your program disc in drive B. This will provide you with plenty of disc space to compile even the largest STOS programs.
**Using the compiler with a ramdisc (1040ST or higher)**
You can increase the speed of the STOS Basic compiler significantly by copying the contents of the COMPILER folder onto a ramdisc. We have therefore included a special STOS-compatible ramdisc along with the compiler. This can be created using the STOSRAM.ACB accessory from the disc.
1. Load STOSRAM from the compiler disc with the line:
`accnew : accload "stosram.acb"`
Enter the accessory by pressing <HELP> and then <F1>.
2. Choose the size of your ramdisc by pressing the S key and entering the number of kilobytes you require. Note that the minimum space required to hold the entire contents of the compiler folder is 150k, and this is why the default setting is 150.
3. You must now set the full path name of the folder which will be loaded from the disc during initialisation. This can be done with the C option. We have set the default to A:\COMPILER but you can set it to any other name you require.
4. Finally, insert a disc containing both STOS Basic, and the COMPILER folder into drive A. Now hit G to add a ramdisc to the existing AUTO folder. This ramdisc will subsequently be created whenever STOS Basic is loaded. On start-up the entire contents of the compiler folder will be automatically copied to the new ramdisc.
The speed increase to be gained from using the compiler in this way is literally staggering. Typical compilation speeds are an amazing 10k per second. This means that the BULLET program from the STOS GAME disc can be compiled in well under 15 seconds! For further details of the STOSRAM accessory see chapter 5.
For users with a single density disc drive it’s important to realise that the STOS language disc and the COMPILER folder won’t both fit on a 320k disc. To solve this, copy the STOS language disc first and remove one of the picture files from the STOS folder. If you’re using a colour monitor then remove the PIC.P13 file otherwise remove the PIC.P11 picture. This will ensure that enough space is available for copying the COMPILER folder.
**Using the compiler with a hard disc**
The default path name for the compiler folder is normally set from the first line of the compiler accessory. This can be changed to any directory you wish by editing the accessory like so:
```
load "compiler.ecb"
10 COMPATH$="D:\STOS\UTILITY": rem Example path name
exit "compiler.acb"
```
You can now copy over the COMPILER folder into the required directory on your hard disc. Note that if the COMPATH$ string is empty (default), the accessory uses the following strategy to find the COMPILER folder:
1. The current directory is searched.
2. The accessory checks the root directory of the present drive.
3. The root directories of any available drives from C upwards are examined.
# 2: The Compiler accessory
If you found the installation procedure rather cumbersome, you will be delighted to hear that
the compilation process is simplicity itself. You begin by booting up STOS Basic using the
new configuration file. This will automatically load the COMPILER accessory into the ST's
memory during initialisation.
Alternatively you can also load the accessory directly from the compiler disc using the
line:
```
accload "compiler"
```
You can now enter the compiler accessory from the <HELP> menu in the normal way. The
following control panel will be displayed on the ST's screen:
![Compiler Control Panel]
The main features of the compiler are controlled through a set of five "buttons". These can
be activated by simply moving the mouse pointer over the appropriate area and clicking once
on the left mouse key.
**SOURCE**

The SOURCE button is used to determine whether a program is to be compiled either from memory or the current disc. Clicking on the box underneath toggles between the two possibilities.

**MEMORY** This option informs the compiler that you wish to compile the program you are currently editing. Any of the four program segments may be compiled independently without affecting the contents of the others. Compiling from memory is very fast, but it does consume a large amount of memory.

**DISC** Some programs are simply too large to be compiled directly from memory. In these circumstances it's convenient to compile a program from a file on the disc. Obviously this is slower than the memory option, but most STOS programs can still easily be compiled within a matter of minutes. Before using this feature, remember to ensure that the COMPILER folder is accessible from the current drive. Also note that the DISC option will be selected automatically whenever memory is running short.
**DEST**
The DEST button selects the eventual destination of the compiled program. Programs may be compiled either to memory or directly into a file on the disc.
**MEMORY** This option compiles the program into memory. Note that the memory used by this feature is completely separate from your original program. So you can subsequently save the compiled program onto the disc without erasing your current STOS Basic program.
**DISC** If you choose the DISC as the destination, the code will be compiled straight into a file without taking up any valuable memory. Since it's much slower than the MEMORY directive, it's only really suitable for compiling particularly large STOS Basic programs.
**COMPILE**
The compilation process is started when you click on the COMPILE button with the mouse. As your program is compiled, a horizontal bar grows across the screen. When this completes its journey, the compilation has been successfully concluded. But if an error is detected, the compiler will terminate and you will be returned to the STOS Basic editor.
Occasionally, errors will be generated from supposedly bug-free programs like Zoltar. The reason for these errors is that the interpreter is only capable of detecting an error in the line which is currently being run. So if an error exists in a section of code which is rarely executed, it can easily be missed. Since the compiler tests the whole program, rather than just the current line, it is able to discover all the syntax errors in your program at once.
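As a hypothetical illustration (the line numbers and the mistake below are invented for this example), the interpreter would never report the problem in line 30 unless that line were actually executed, whereas the compiler should flag it as soon as you compile:

```
10 print "main path"
20 end
30 goto 500 : rem line 500 does not exist - never reached when running interpreted
```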
**QUIT**
Exits from the compiler accessory and returns you to the editor.
**DISC**
This button allows you to choose whether the compiled program is to be run either from STOS Basic, or directly from the Gem Desktop.
**BASIC**
This is the default, and generates a compiled program which can only be run within the STOS Basic system. Files produced with this option have the extension "CMP".
**GEM**
The GEM directive allows you to create a program which can be run independently of the STOS Basic system. These programs have the extension "PRG", and can only be executed from the Gem Desktop. Furthermore, since they consist entirely of machine code, they cannot be listed or amended from STOS Basic. Programs in this format can be sold or distributed in any way you like. Depending on the facilities used, the Gem run version of a file will be between 40 and 80k larger than the equivalent STOS run program.
**OPTIONS**
Whenever the compiler is loaded, a number of configuration settings are read from a special OPTIONS.INF file in the COMPILER folder. These settings provide you with the ability to fine-tune the eventual program to your particular needs. They can be changed at any time by simply clicking on the OPTIONS button from the compiler menu which displays the following screen:
![Compiler Options Menu]
**COMPILER TESTS**
This option is used to set the frequency of certain internal checks. Although the coordinates of a STOS sprite are updated using interrupts, the sprites on the screen are only moved prior to the execution of a Basic instruction. While this is happening, STOS also checks for CONTROL+C, and tests whether the pull-down menus have been accessed by the user.
**COMPILER TEST OFF**: This completely removes the tests from the compiled code. The result is that the compiled program ignores CONTROL+C, refuses to open your menus, and does not automatically update your sprites when they are moved. Set against this, however, is the fact that the final code will be around 10% faster.
**COMPILER TEST NORMAL**: A check is performed before every branch such as GOTO, NEXT, REPEAT, WEND, ELSE and THEN. Tests are also carried out prior to slow instructions such as PRINT and INPUT. This setting is used as the default.
**COMPILER TEST ALWAYS**: Adds a test before every STOS Basic instruction, leading to particularly smooth sprite movements. As you would expect, programs compiled in this way are slightly slower than with NORMAL or OFF settings.
Note that these settings can also be changed directly from your STOS Basic programs. See Chapter 5 for further details.
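For instance, assuming the COMTEST ON / OFF / ALWAYS extension command listed in Chapter 5, a program might switch the test level around a speed-critical loop (a sketch, not an example taken from the manual):

```
10 comtest off : rem fastest code: no CONTROL+C or menu tests
20 for i=1 to 10000 : next i
30 comtest on : rem restore the normal checks
```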
**Gem-run options**
These options allow you to tailor the default environment of a compiled program which is to be run from the Desktop. They have no effect on any programs compiled for use within STOS Basic.
RESOLUTION MODE: This directive allows you to select between low or medium resolution when your program is executed from the Desktop using a colour monitor. To change the resolution simply click on the appropriate icon. Note that if your program is subsequently run on a monochrome monitor, this option is completely ignored.
BLACK AND WHITE ENVIRONMENT: Chooses between normal or inverse graphics when a Gem-run program is executed on a monochrome monitor.
NORMAL Uses white text on a black background
INVERSE Produces a “paper white” display.
DEFAULT PALETTE: This allows you to assign the colours which will be initially used for your graphics.
- The first icons select one of the 16 possible colours to be set (4 in medium resolution).
- Click on this box to increment the colour number by one.
- Click here to decrement the colour number by one.
- The rightmost icon buttons set the exact hue of this colour.
- Click on a “+” to add one to the red, green or blue component of the colour respectively.
- “-” buttons subtract one from the appropriate colour value.

For speed you can quickly step through the values by holding down the right mouse button, but for subtle single steps use the left button. This applies to any other option that uses ‘-’ and ‘+’ icons.
FUNCTION KEYS: The window used for the STOS function key assignments will normally be drawn on the screen during the initialisation process. This adds a slightly unprofessional feel to your finished program. You can avoid this effect using the following directive from the compiler menu.
ON The function key window is automatically drawn at the top of the screen during initialisation.
OFF The function key window is omitted.
CURSOR: Activates or deactivates the text cursor when the program is initialised. This setting can be changed at any time from within your compiled program using the CURS ON/OFF commands.
MOUSE: The MOUSE option allows you to decide whether the mouse will be turned on or off as a default. As you might expect, you can reactivate the mouse within your program using the STOS Basic SHOW instruction.
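A small sketch (invented for illustration) combining the SHOW/HIDE and CURS instructions mentioned above:

```
10 hide : rem the state a MOUSE off default leaves you in
20 curs off : rem and likewise for a CURSOR off default
30 print "Press any key to restore the pointer and cursor"
40 wait key
50 show : rem reactivate the mouse pointer
60 curs on : rem and the text cursor
```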
**Language:** Toggles the language used for any system messages between ENGLISH and FRENCH.
**Main Menu:** Returns you to the main compiler menu.
**Next Page:** Displays the next page of options. (See below)
**Load Options:** Loads an existing set of options from an OPTIONS.INF file from the disc.
**Save Options:** Saves the current options to an OPTIONS.INF file in the COMPILER directory.
**Loaded Character Sets:** The compiled program normally includes all three of the STOS character sets. But if you only intend to run your program in a single resolution, the character sets used by the remaining modes will waste valuable memory. The STOS compiler therefore lets you select precisely which character sets are to be loaded.
*Warning! Any attempt to run your program in the wrong resolution after this option has been set, will crash the ST completely.*
**Loaded Mouse Pointers:** As with the character sets, you can use this option to omit the data used by the mouse pointers in the resolutions your program will not be using. But be careful, as improper use of this command can crash the ST!
**Window Buffer Size:** The windowing system used by STOS Basic, normally keeps a copy of the entire contents of all the windows which your program has defined. If your program doesn’t use windows, then this memory will be completely wasted. The default setting is 32k. This can be altered in 1k steps by clicking on the "++" and "--" boxes. You can calculate the memory needed by your windows using the following simple rules:
- Each character position takes up two bytes in medium/high resolution and four bytes in low resolution. In low resolution the main STOS screen holds 1000 characters, so the memory taken by this screen is 1000*4 or 4000 bytes.
- If you change this setting, don’t forget about the function key window and the file selector! These use about 8k of memory.
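As a worked example of the rules above (the variable names are invented), the buffer needed for the low resolution main screen plus the roughly 8k of system windows could be estimated like this:

```
10 rem 1000 characters * 4 bytes each for the low resolution main screen
20 M=1000*4
30 rem about 8k for the function key window and the file selector
40 K=8*1024
50 print "Buffer needed: about ";(M+K)/1024;"k"
```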
**Sprite Buffer Size:** Before a STOS sprite is copied to the screen, it is first drawn into a separate memory buffer. If you are really pressed for space and you are only using the smallest sprites, you can reduce this buffer to around 1k.
*Warning! This option is extremely dangerous. Do not use it unless you know precisely what you are doing, or the ST will almost certainly crash!*
I'll now go through the process of using the compiler in a little more detail. You start off by loading the compiler accessory into memory with a line like:
```
accload "compiler"
```
Now enter the following STOS program into your computer:
```
10 timer=0:for i=1 to 10000:next i
20 print "Loop took ",timer/50.0," seconds"
```
This program will take approximately seven seconds to run using interpreted STOS Basic. Insert a disc containing the compiler folder in drive A, and enter the accessory menu with the <HELP> key. The compiler can now be accessed by selecting the COMPILER accessory.
Move the pointer over the COMPILE button and click on the mouse. The disc will now whir for a few seconds as the required compiler libraries are accessed from the disc. As the program is translated, the progress of the compiler is represented by a horizontal bar. When this reaches the edge of the screen, the compilation process has been successfully completed. You will now be presented with the option to grab your finished program into one of the available program segments.
Position the mouse pointer over the first free segment and click on the left button. Note that selecting Save from this menu displays a standard STOS file selector which can be used to save your compiled program straight to the disc. The compiler will now grab the compiled program into the free program area selected.
Compiled programs are executed using the familiar RUN command from STOS Basic, so just type RUN<RETURN>. The loop now runs in around three seconds, which is over twice the speed of the interpreted version.
Incidentally, since the program has been converted into machine code, any attempt to generate a listing will produce the response:
```
COMPILED PROGRAM
Don't change line 65535!
```
Line 65535 contains a special instruction which executes the compiled program stored in the ST's memory. Removing this line will effectively destroy your compiled program. It's theoretically possible to incorporate separate lines of interpreted Basic into the compiled program. This practice is not however, recommended.
Note that compiled programs can also be accessed using the normal LOAD and SAVE instructions. If you wanted to save your current program, you could therefore type something like:
```
save "loop.cmp"
```
One unique feature of the STOS compiler, is that you can keep your interpreted program in memory while you are debugging the compiled version. Whenever a bug is detected, you can then effortlessly flick back to the Basic code, and make the appropriate changes. This code can be subsequently re-compiled in a matter of seconds, without having to leave the STOS Basic environment at all.
The previous example was relatively trivial. I'll now show you how a full-sized game can be compiled with this system.
Place the STOS games disc into drive A and type:
```
dir:rem Update current disc directory
dir$="bullet":rem Enter Bullet directory
load "bullet.bas":rem Load Bullet
```
Now insert a copy of the compiler disc into drive A. If you're using an unexpanded 520 ST, this disc should have been created previously using the COMPCOPY program I mentioned earlier.
First call the COMPILER accessory from the <HELP> menu in the normal way. When the main screen appears, click on the button immediately below DEST. This will force the compiled program to be generated directly onto the disc, and may be omitted if you are using a 1040 ST. Now choose the COMPILE option and click once on the left mouse button. You will then be presented with a standard STOS file selector which prompts you for the name of your compiled program. Enter a name like "BULLET.CMP".
After a few minutes, the compilation process will be completed, and you will be returned to the compiler screen. Exit from the accessory using the QUIT option and type:
```
accdead:rem Remove all accessories (only needed for 520 users)
load "bullet.cmp"
```
Now place a copy of the STOS games disc into drive A and enter the lines:
```
dir:rem Update directory
dir$="bullet"
run
```
Your newly compiled version of Bullet Train will now execute in the usual way. As you can see, compiled programs are much faster than interpreted ones!
So far, I've only shown you how to create a compiled program for use within STOS Basic. But the ability to generate Gem runnable programs is much more exciting as it enables you to distribute your work with none of the protection problems encountered with a run-only interpreted program.
I'll begin with a small example which displays a Neochrome picture on the ST's screen. Type in the following program:
```
5 mode 0:flash off
10 FS$=file select$("*.NEO","Display a NEOCHROME screen")
20 if FS$="" or len(FS$)<5 then end
30 if right$(FS$,4)<>".NEO" then boom:goto 10
40 hide:load FS$,back:rem Load screen
50 wait key:show
```
Put your working copy of the compiler disc into drive A and call up the compiler from the <HELP> menu. Now click on the box marked BASIC. The title of this box should immediately change to GEM. You can now start the compilation process with the COMPILE button. After a short while, you will be prompted for a filename for your new program. This file will then be written to the disc and the compilation process will be concluded.
If you wish to test this program, you will need to leave STOS completely and execute it from the Gem Desktop. Don't forget to save your original program first!
On average, Gem-run programs are about 40k larger than the equivalent compiled program. The reason for the increase in size is that Gem-run programs have to be completely self-sufficient. This requires them to incorporate large segments of the appropriate compiler libraries.
Gem-run programs can be run directly from the Desktop like any other program. They do not need any support from the rest of the STOS system. Note that once you have compiled a program in this format, it cannot be subsequently executed from the STOS Basic system. You should therefore always retain a copy of your program in its original interpreted form.
Finally, I'll provide you with a full-sized example of a Gem-run program. Place a disc containing the sprite definer into drive A and load it with:
```
dir:rem Update current directory
load "sprite.acb"
```
Enter a disc containing the compiler libraries into drive A. Since this is a very large program, there won’t be enough space to compile it directly into memory on a 520 ST. In this case you should specify compilation from memory to disc by clicking on the button below DEST. Now toggle the BASIC-> icon to GEM->, and select COMPILE. The disc will be accessed for a couple of minutes as the sprite editor is compiled onto the disc.
You have now produced a complete stand-alone version of the sprite editor which can be run independently of the STOS system. This might well prove extremely useful, especially when you are importing graphics directly from the Neochrome or Degas drawing packages.
4: Troubleshooting
Although you are unlikely to encounter any major problems when using the compiler, it’s still possible that an unforeseen difficulty could occur at one time or another. We’ve therefore provided you with a comprehensive troubleshooting guide which will help you through most of the more common errors.
**The compiler generates an out of memory error**
This can happen if you are trying to compile a large (100k+) program on an unexpanded 520 ST. The compiler provides you with four different options which can be used to conserve memory. Here is a list of the various possibilities in descending order of speed:
<table>
<thead>
<tr>
<th>SOURCE</th>
<th>DESTINATION</th>
<th>COMMENTS</th>
</tr>
</thead>
<tbody>
<tr>
<td>MEMORY</td>
<td>MEMORY</td>
<td>Very fast but uses the maximum amount of memory</td>
</tr>
<tr>
<td>DISC</td>
<td>MEMORY</td>
<td>Slower but uses considerably less memory</td>
</tr>
<tr>
<td>MEMORY</td>
<td>DISC</td>
<td>Slightly slower than disc to memory but the memory usage can occasionally be less</td>
</tr>
<tr>
<td>DISC</td>
<td>DISC</td>
<td>Uses very little memory.</td>
</tr>
</tbody>
</table>
With disc to disc compilation, the only limit to the size of your program is the amount of available disc space, although the process is quite slow on a single floppy drive. When you get an out of memory error, you should try each of the above options in turn. If you still have problems, you will need to reduce the size of your program in some way.
The easiest solution is to get rid of the permanent memory banks which are used by the program. These can be defined during program initialisation using the RESERVE command and loaded separately from the disc. Your program's initialisation phase will now include the following steps.
1 Define each screen bank with RESERVE AS SCREEN.
2 Load these screens from the disc with LOAD.
3 Define any DATA banks in the original program as WORK banks. Use the RESERVE AS WORK command for this purpose.
4 Load the external data from the disc with LOAD.
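Put together, such an initialisation phase might look like the sketch below. The bank numbers, the bank length, and the file names are purely illustrative:

```
10 rem Illustrative initialisation phase
20 reserve as screen 5 : rem Step 1: define a screen bank
30 load "title.mbk",5 : rem Step 2: load the screen from disc
40 reserve as work 6,32000 : rem Step 3: define a work bank for data
50 load "level.mbk",6 : rem Step 4: load the external data
```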
Since a large percentage of the space used by many STOS programs is taken up by the memory banks, this technique can lead to dramatic reductions in size, without noticeably affecting the program's performance.
Another idea is to split your program into several parts, and load these into memory with RUN when required. This technique is known as overlay programming, and is commonly used in commercial games. If you do use this approach, you will need to remember to compile each program module separately. Don't try to combine interpreted modules with compiled modules or your program will fail when run from the desktop.
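As a sketch, an overlay scheme might be arranged as follows, with each module compiled separately. The file name is hypothetical:

```
10 rem MAIN.CMP - title screen and menu
20 rem ... display the menu and wait for a choice ...
30 run "game.cmp" : rem Load and execute the next compiled module
```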
**The compiler returns an UNDIMENSIONED ARRAY ERROR for an array which has apparently been correctly dimensioned**
The compiler requires the DIM statement to occur in the listing BEFORE the arrays are used. Take the following example.
```
10 gosub 1000
20 a(10)=50
30 end
1000 dim A(100):return
```
This causes an error because when the compiler checks line 20, it has yet to encounter the DIM statement at line 1000. It therefore generates an erroneous error message. The solution to this problem is simply to dimension all arrays at the start of the program. So you can fix the above routine by replacing line 10 with:
```
10 dim a(100)
```
**You get a syntax error at an ON...GOTO or ON...GOSUB statement**
In order to optimise the speed of the compiled code, the line numbers used by ON...GOTO and ON...GOSUB must be constants rather than variables or expressions.
So a line like:
```
on A goto 1000,10000+A*5,500
```
will produce a syntax error. This should be replaced by:
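For instance, since the second destination is only ever taken when A equals 2, the expression 10000+A*5 always evaluates to the constant 10010, and the line can be written as:

```
on A goto 1000,10010,500
```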
**A previously error-free program returns a syntax error when compiled**
This happens quite often, and is simply a reflection of the improved sensitivity of the compiler to genuine syntax errors. Take the following program:
```
10 print "hi there"
20 goto 10
30 print "This is an error"
```
If you try to run this program using the interpreter, then the spelling mistake at line 30 will be missed, since it is never actually executed. But if you compile it, the compiler will detect the error immediately and ask you to correct it.
**Problems occur when you try to compile a program using certain extension commands**
Any extensions which are to be compiled need to have a separate extension file for the compiler. This has the extension ".ECN" where N is the identifier of the extension file. The appropriate file will normally be included along with the extensions, and should always be placed in the COMPILER folder.
**The colours of a Gem-run program are different from the interpreted version**
This problem can occur if you have been altering the default colour settings using the options menu. Remember that when these are saved to the disc, they affect all subsequent compilations. Correct this by simply restoring the standard options from the original compiler disc.
**A program which reserves a memory bank within a FOR...NEXT loop crashes inexplicably**
A program which creates a memory bank within a FOR...NEXT loop will behave unpredictably if the bank number is held in an array. This could lead to a total crash of the STOS Basic system. The reasons for these problems are complex, but the sort of code to watch out for is:
```
10 dim b(15)
20 for b(3)=1 to 10
30 reserve as screen b(3)
40 next b(3)
```
The difficulty can be avoided by either using a simple variable as the index, or defining the banks explicitly outside the FOR...NEXT. For example:
```
20 for i=1 to 10
30 reserve as screen i
40 next i
```
5: Compiler extension commands
The compiler adds three extended commands to the normal STOS Basic system. These commands are only used in a compiled program. They have no effect whatsoever when the code is interpreted.
In a normal STOS Basic program the following tests are performed at regular intervals.
- Sprite updates.
- Menu checks.
- Control+C tests.
The COMPTEST instructions provide you with fine control over the testing process.
**COMPTEST ON**
Checks are only carried out before jump instructions such as GOTO and WHILE, and especially slow commands like PRINT or WAIT. Note that COMPTEST ON is the default setting used by interpreted programs.
**COMPTEST OFF**
The COMPTEST OFF command stops the testing completely, improving the speed of the program by up to 10%. This allows you to optimize time critical sections of a compiled program. It is particularly useful for routines which have to perform large numbers of complex calculations in a relatively short space of time. Typical examples of such programs include 3D graphics packages and fractal generators. One dangerous side effect of this command is that it is impossible to interrupt a program until the compiler tests are restored. So try to get into the habit of saving your current program before calling this function. Otherwise, an infinite loop could lock up the system completely, losing all your valuable data.
Example:
```
10 dim a(10000), b(10000)
20 for i=0 to 10000: a(i)=i: next i: rem Load an array
30 comptest off: timer=0: print "Compiler test off"
40 for i=0 to 10000: b(i)=a(i): next i
50 print "Loop executed in ", timer/50.0, " seconds"
60 comptest on: timer=0: print "Compiler test on"
70 for i=0 to 10000: b(i)=a(i): next i
80 print "Loop executed in ", timer/50.0, " seconds"
```
Try stopping the program with Control+C after the compiler tests have been switched off. The program will terminate around line 60, since this is the first time the Control+C test has been performed.
**COMPTEST ALWAYS**
This adds a test before each and every STOS Basic instruction. It results in slightly smoother sprite movement, and finer control over the menus. The precise effect of this command will entirely depend on the mixture of instructions in your program. If your program makes heavy use of instructions such as GOTO and FOR...NEXT, the difference will be barely noticeable. But if your routine will be performing extensive calculations while also using the sprite commands, this instruction could prove invaluable.
6: The new floating point routines
When STOS Basic was first designed, it used the latest IEEE standard for its floating point numbers. This allowed your program to use numbers between -1.797693E+308 and +1.797693E+308. These numbers were accurate to 16 decimal digits.
However it was quickly discovered that few users really needed this level of accuracy. The vast majority of arcade games don't use real numbers at all, and restrict themselves to integer arithmetic for the maximum speed. Furthermore, any programs which do need floating point operations usually require them to be performed extremely quickly. This is especially true of programs which generate 3D graphic effects.
After much thought, we have therefore decided to replace the existing format with a faster single precision one. The new system allows a floating point number to range between 1E-14 and 1E+15, and precision is now limited to seven significant digits. This should be more than adequate for the vast majority of programmers.
The speed improvement when using the new format is extremely impressive. All floating point operations are approximately three times faster, with trigonometric functions like SIN and COS being performed at more than 30 times their earlier speed! This applies equally well to both interpreted and compiled programs.
Note that this compiler is currently only compatible with the new system. So unless you genuinely need to use double precision arithmetic in your programs, you should upgrade all your copies of STOS Basic to version 2.4 immediately. See the installation guide for further details of this process.
Incidentally, if you try to list any of your existing programs which use real numbers, the following text will be displayed on the screen.
```
BAD FLOAT TRAP
```
In order to allow you to run these programs from STOS v2.4 we have included a useful little utility called CONVERT.BAS which will automatically transform your programs into the correct format.
You can call this program from the compiler disc by typing:
run "convert.bas"
You will now be prompted for one of your STOS V2.3 programs. Insert the appropriate disc in drive A and select your program using the STOS file selector. This program will then be quickly converted into STOS V2.4 format, and will be copied back to the original file. It's a good idea to perform this conversion process for every one of your STOS Basic programs which use real numbers. This will avoid the risk of confusion in the future.
7: Technical details
In this section we will be discussing a range of advanced topics which will be especially relevant to those with a little programming experience.
Improved garbage collection
The problem of "garbage collection" arises in any language which allows the user to manipulate variable-sized pieces of information. The classic example of this problem in STOS Basic occurs with strings. Take the following small Basic program:
```
10 input a$: rem Input string
20 a$=a$+a$: rem Double the length of the string
30 b$=a$-" ": rem Subtract all spaces from the string
40 c$=left$(a$,3)
50 print a$
60 goto 40
```
The above program may look extremely simple, but underlying all this casual string manipulation, the STOS interpreter is performing a frenetic amount of activity.
Like all variables, the characters in a string need to be stored somewhere specific in the Atari ST's memory. But what actually happens if you increase the size of a string? The system can't just tack the extra characters on at the end as this will overwrite any other variables which have been positioned immediately after it.
One solution would be to move the entire list of variables so as to create the correct number of spaces at the end of the string. In practice however, this would prove incredibly slow.
It's much easier to simply define a new string with the same name, and then insert it at the next free memory location. Of course, the characters making up the old string are now totally useless, and are taking up valuable memory space. Another source of potential waste are the intermediate results generated by operations such as "+", "-", and SPACE$. As time goes by, this "garbage" will start to clutter up the ST's entire memory. Eventually the STOS system will be forced to totally reorganize the ST's memory to recover the unused space for your program. This process is known as garbage collection.
Since it's impossible to predict when the memory will finally run out, garbage collection can occur at wildly unpredictable intervals. Furthermore, in extreme cases, the process can take up to several whole minutes to complete. This can lead to sudden and inexplicable delays in the execution of your program. The worst problems occur with programs which perform a large amount of string manipulation such as adventure games. Fortunately, the STOS compiler provides you with the perfect solution to this problem.
Enter the following program:
```
10 dim a$(5000)
20 for x=0 to 5000
30 a$(x)=space$(3)+" a"+space$(2): rem This generates a lot of garbage
40 home:print x:
50 next x
100 timer=0
110 print free: rem Force a garbage collection
120 print timer/50.0
```
If you run this program using STOS Basic v2.3 the garbage collection will take several minutes. Now try it on STOS Basic v2.4. You will be delighted to discover that the entire process occurs almost instantaneously. This is because, when François Lionet created this compiler, he cleverly optimised all the garbage collection routines for maximum speed. So garbage collection will never be a potential problem for one of your compiled programs.
The compiler
The STOS Basic compiler was designed to use as little memory as possible. In fact, most of the memory needed by the system is borrowed from the sprite background screen. That's why the mouse disappears while the compiler is executing.
How the compiler works
The compiler first reserves some memory in the current background screen. It then looks into the COMPILER folder and opens the main Basic library (BASIC204.LIB). The addresses of all the appropriate library routines are now loaded into the ST's memory. The next step is to check for the existence of any extension files ending in .EC. These contain all the information required to compile the Basic commands added by an extension. Whenever an extension is discovered, a full catalogue of the additional routines is added to the current list. The execution of the compiler is split into three separate phases which are known as passes.
PASS 0 The first pass checks the STOS Basic program for possible syntax errors, and makes an initial attempt at converting the program into machine code. While it does this, it produces a full list of all the library routines which will be called by the program. Note that no code is actually generated by this pass, as the intention is merely to estimate the final size of the compiled program. The compiler can now safely reserve the precise amount of space which will be needed, without wasting valuable memory.
PASS 1 Analyses the Basic program using exactly the same method as pass 0. It then converts the entire STOS Basic program into machine code, and copies this data to either memory or the disc. At the same time it also creates a number of tables including the relocation table.
The compiler now incorporates any library routines which are accessed from the compiled program. It is important to note that only the routines which are actually used by the program will be included in the final code. This reduces the size of the compiled program to the absolute minimum. The following steps are then performed by the compiler in quick succession:
1 If an extension command is used in the program, the extension libraries are searched and the appropriate routines are written into the compiled program.
2 The relocation table is copied into the program. This allows the compiled program to be executed anywhere in the ST's memory.
3 The table used to hold the address of the program lines is then added.
4 Any string constants which are used are added onto the end of the program.
5 If the program is to be run from Gem, the compiler copies over the various library routines needed by the sprite, windows, menus, music and floating point arithmetic. These add approximately 40k to the length of the program.
PASS 2 This pass simply explores the relocation table created in pass 1, and sets the required addresses in the compiled program. The compiler now closes all the open disc files, and transfers the program to the current program segment if required.
Note that the eventual size of a compiled program depends entirely on the precise mix of STOS instructions which are used. There's no real relationship between the complexity of the program and the size of the code. In practice, some of the simplest Basic instructions proved to be the hardest to actually write. A good example of this is the STOS file selector, which involves over 4k of machine code.
You can see below the machine code produced by the compiler from a simple plot command. The Basic listing:
```plaintext
plot 320,100,1
```
The compiled program:
```plaintext
move.l #1,-(a6)
move.l #100,-(a6)
move.l #320,-(a6)
jsr plot
```
The subroutine 'plot' is a library routine which will be merged into the compiled program.
STOS-run
STOS-run programs have the standard Basic header and a fake line at 65535 containing a single instruction which calls the compiled program. The memory banks are handled by the editor in exactly the same way as for a normal interpreted program. This means you can BGRAB or SAVE them in the usual way. Incidentally, it's also possible to execute a compiled program as a STOS accessory. In order to do this, load the program into memory and resave it with the extension ".ACB".
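For example, turning a compiled program into an accessory might look like this. The file names are hypothetical:

```
load "myprog.cmp"
save "myprog.acb"
```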
Gem-run
GEM-run programs have the same header format as TOS files. There is, however, no relocation table for TOS, and the relocation address points to an empty table. Instead of this, the beginning of the program is written in PC relative code. This contains a small routine which relocates the main program using the STOS relocation table.
The first thing a GEM-run program does is to initialise the memory banks which are normally chained to the compiled program. It starts by finding the address of the top of memory using location $42E. It then subtracts 64k for the screens and moves the memory banks (if present) to the end of memory. This process provides the compiled program with plenty of free space to work with, situated between the end of the program and the beginning of the memory banks.
The program then sets up the standard STOS environment. It first initialises all the TRAP routines (sprites, windows, music). Then it activates the normal STOS interrupts and kills the interrupts used by GEM.
Finally it erases the screen, activates the mouse pointer and starts executing the compiled program.
8: Utility programs
This package includes a number of small accessory programs for your use.
The ramdisc accessory
The STOSRAM program allows you to create a ramdisc of any size up to the maximum available RAM. It is especially useful for 1040 users, who can copy the COMPILER folder into memory, speeding up the compilation process significantly. See the section on installation for more details.
The action of the accessory is to add a separate STOSRAM PRG program to your current AUTO folder. This will be executed every time STOS Basic is subsequently run, and will be automatically loaded with the contents of any folder on the current disc. Here is a list of the possible options.
- `<A>` or `<B>` Sets the drive on which the ramdisc program will be installed.
- `<S>` Chooses the size of the ramdisc. The default is 150k, which is just right for the contents of the COMPILER folder. When this option is selected you will be requested to input the ramdisc's size. This number should be entered in units of a kilobyte.
- `<C>` Selects the path name for the folder which is to be loaded into the ramdisc on start-up. If this string is empty, or the folder you have requested cannot be found, then the ramdisc will be left vacant. Note that only the individual files in the directory can be copied, not entire folders.
- `<G>` Creates a new ramdisc using the options you have previously set. The STOSRAM PRG program is now copied into the current AUTO folder on the disc. If an AUTO folder doesn't currently exist, then one will be created. An important point to remember is that the ramdisc won't be removed from memory by resetting the computer—it must be completely turned off. If you reset and boot up STOS another ramdisc will be created.
The disc formatter accessory
The FORMAT accessory enables you to format a disc directly from STOS Basic. Discs can be formatted using the following options:
- `<A>` or `<B>` Selects current drive.
- `<1>` or `<2>` This toggles between 360k single-sided format (1) and 720k double-sided format (2).
- `<G>` Formats the disc in the current drive.
Coherence Planning: From Proof of Concept to Production
An Oracle White Paper
November 2008
Coherence Planning:
From Proof of Concept
to Production
Introduction
Preparing to Test
    Architecture
    "Stretch Clusters" and the "Split Brain" scenario
    Multi-Threading and Thread Safe Operations
    Development Techniques
Performance and Scalability
    Support Tools
        Management and Monitoring
    Sizing
        How do you determine how big your cache needs to be?
        How do you limit the size of your cache?
        How many JVM's?
    Configuration Management
    Security
    Change Control
        How do I change my configuration on the fly?
        How do I change my cache objects in a production system?
        How do I keep my .NET, C++ and Java code synchronized?
Testing
    Testing Strategies
        What to test?
        What to measure?
        How to test?
    Example Proof of Concept (PoC) Tests
        Measuring Latency and Throughput
        Scalability
        Data Reliability
        Destructive Testing
Deploying to Production
    Transitioning from Test to Production
    Roadmap
Conclusion
INTRODUCTION
Whilst setting up a development environment for Coherence is relatively trivial, planning and moving this into production requires considerable testing and careful consideration to ensure the full benefits of Coherence are realized. The following observations and guidelines are meant to supplement those outlined in the Coherence Production Checklist, Performance Tuning and Best Practices guides, not to replace them. These documents should be read prior to reading this document, as the contents of these documents are not going to be replicated here.
PREPARING TO TEST
Architecture
For virtually all data caching scenarios the distributed cache scheme (i.e. Partitioned Cache) is the best option, because it provides much better scalability and is suitable for a wide range of use cases – read-only, read-mostly, read-write, write-mostly and write-intensive. Replicated caching effectively relies on scaling up, not out, to provide greater capacity, which can make replicated caches much more expensive from a resource (i.e. memory) perspective on a JVM-by-JVM basis. Similar performance for reads can also be obtained in many cases by using a near cache in conjunction with a distributed cache.
The level of resilience required by applications is also worth considering when planning your architecture. Not all applications require caching to be resilient to JVM failures, i.e. the data is easily re-creatable from a separate data source if the cached data is not configured to be redundant and a cache node crashes. If backups are not required, more memory will be available and write performance will be improved as it will only take 2 network hops for an update or insert, instead of 4 (2 to the primary and 2 to the backup).
For example, if no backups are used an insert only results in 1 network hop from the client to the cache node where the primary copy of the data resides and 1 network hop back to the client. If backups are used then before the response can be sent to the client the backup copy of the primary data on another cache node
needs to be updated too, resulting in 1 hop to the backup node and 1 hop back, or 2 additional network hops.
"Stretch Clusters" and the "Split Brain" scenario
To maintain a stable and resilient cluster Coherence relies on fast and reliable network connectivity between cluster nodes. Like all clustering technologies this is crucial to maintaining a single logical cluster. If a cluster is configured to span servers separated by a MAN or WAN as a “stretch cluster”, where the connection is relatively slow, e.g. several milliseconds, or un-reliable, the cluster may operate less predictably and the cluster can even break in two, if the connection is dropped for a prolonged period of time. This is called a “split-brain” scenario, where each part of the cluster at the different sites believes the other site has failed and that it is now the only surviving part of the cluster.
When this happens each part of the cluster will continue to function separately as a separate cluster. If the link returns then the cluster that is determined to have priority will take precedence and the other cluster will throw away its data and then join the more senior cluster. Seniority of the clusters is based on the number of active members in each cluster at the time the two clusters re-establish communication.
The recommended approach to handling unreliable and/or high-latency networks is to replicate changes between sites using the Coherence Extend mechanism and/or the Coherence patterns for messaging. If data is already being replicated, data may just be re-read from the replicated data store. Finally the problem can be avoided by not deploying a “stretch cluster” unless the network connection is both low latency and considered completely reliable.
Multi-Threading and Thread Safe Operations
As Coherence is a multi-threaded environment, it's important that any application logic is aware of its threading rules. Operations performed on the service threads on cache nodes must take care not to cause a deadlock scenario. Service threads in Coherence may invoke:
- MapListener’s
- Network Filters
- Custom Serialization-De-serialization (e.g. ExternalizableLite)
- BackingMapListener’s
- CacheStore’s
- Query logic such as Aggregator’s, Filter’s, ValueExtractor’s and Comparator’s
- EntryProcessor’s
- MapTrigger’s
- InvocationService Invocable’s
So these should never make re-entrant calls back into their own services.
Service threads can have the following characteristics:
• A service may have three types of threads, a listener thread, a primary service thread and an optional thread pool. All of these can be thought of as "service threads".
• Refresh-ahead and write-behind caches use a special thread to invoke CacheStore, but these threads should also be considered to be service threads (e.g. in the event of eviction in a write-behind cache, CacheStore.remove() will be called by the actual service thread rather than the write-behind thread).
• Not all services have all types of threads.
• In many cases, a service thread may be the thread to invoke a piece of application logic, or it may end up being an application thread.
• Coherence does not always fail-fast when making re-entrant calls.
As an aside, it is especially critical to never spend much time in a client-side MapListener, as there is never more than one of them for a given service.
A service is defined as a unique combination of service type (e.g. Invocation, Replicated, Distributed) and service name. So you can call from a service “Dist-Customers” into one named “Dist-Items”, or from “Dist-Customers” into “Repl-Inventory”.
Service names are configured in the cache configuration file using the services element. Whether the call is local or remote is irrelevant (in the current implementation). In particular, this complicates the use of key association to support efficient assembly of parent-child relationships.
If you use key association to collocate a Parent object with all of its Child objects, you cannot send an EntryProcessor to the parent object and have that EntryProcessor "grab" the (local) Child objects, even though those Child objects are already in-process. You can use direct access to the server-side backing map (which requires advanced knowledge to do safely), or you can run the logic on another service (e.g. Invocation targeted via PartitionedService.getKeyOwner()), and have that service access the data via NamedCache interfaces, or you can place the Child objects on another service which would allow re-entrant calls (but incur network access since there is no affinity between partitions in different cache services, i.e. the partitions could be on separate JVM's).
Using the Invocation service approach is probably the best compromise for most use cases (or embedding the Child objects in the Parent cache entry). Even when re-entrancy is allowed, you need to be very careful to avoid saturating the thread
pool and causing catastrophic deadlock. For example, if service A calls service B, and service B calls service A, there is a possibility that a sufficient number of concurrent calls could fill one of the thread pools, which would cause a form of deadlock. As with traditional locking, using ordered access (e.g. service A can call service B, but not vice versa) can help.
To summarize:
- A->A is never allowed
- A->B->A is technically allowed but is deadlock prone and should not be done
- A->B and B->C and C->A is similarly restricted
- A->B is allowed
- A->B and B->C and A->C is similarly allowed
Further information on Coherence cluster services can be found in the Coherence documentation.
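The call-ordering rules above amount to requiring that the "may call" relationships between services form a graph with no cycles. A minimal plain-Java sketch of that check (service names are illustrative and this is not a Coherence API, just a way to validate a planned call graph):

```java
import java.util.*;

// Model the "may call" edges between Coherence services and flag cycles,
// which correspond to the deadlock-prone configurations above
// (A->A, A->B->A, A->B->C->A, ...). Names are illustrative only.
public class ServiceCallGraph {
    private final Map<String, Set<String>> edges = new HashMap<>();

    public void allow(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    // True if any service can (transitively) call back into itself.
    public boolean hasDeadlockRisk() {
        for (String start : edges.keySet()) {
            if (reaches(start, start, new HashSet<>())) return true;
        }
        return false;
    }

    private boolean reaches(String from, String target, Set<String> seen) {
        for (String next : edges.getOrDefault(from, Collections.emptySet())) {
            if (next.equals(target)) return true;
            if (seen.add(next) && reaches(next, target, seen)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ServiceCallGraph ok = new ServiceCallGraph();
        ok.allow("Dist-Customers", "Dist-Items");     // A->B
        ok.allow("Dist-Items", "Repl-Inventory");     // B->C
        ok.allow("Dist-Customers", "Repl-Inventory"); // A->C
        System.out.println(ok.hasDeadlockRisk());     // false

        ServiceCallGraph bad = new ServiceCallGraph();
        bad.allow("Dist-Customers", "Dist-Items");
        bad.allow("Dist-Items", "Dist-Customers");    // A->B->A: restricted
        System.out.println(bad.hasDeadlockRisk());    // true
    }
}
```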
**Development Techniques**
Ensuring that developers continually test against a representative environment is also crucial so that expectations are set properly about performance, i.e. taking into account network hops, so there is early visibility of any issues related to performance and concurrency, etc.
**PERFORMANCE AND SCALABILITY**
Coherence scales linearly to provide predictable performance as data volumes and client requests increase. Cache sizes can be 100’s of GB and it can support 1000’s of users. Where CPU intensive operations, like cluster-wide events (MapListener’s, NearCache’s, ContinuousQueryCache’s) or cluster-wide requests (those targeted with Filter expressions rather than with one or more primary keys) such as queries and aggregations, are used a higher CPU:MEM ratio may be required.
In newer releases of Coherence (3.4+) objects can be stored in Portable Object Format (POF) and POF-native services can be configured. This can provide a real performance boost, as it enables caches to take objects in POF format and store them in the cluster without any intermediate transformations and vice-versa. Hence, the de-serialization and serialization steps will no longer occur in the proxy service and its CPU requirements will be significantly reduced.
The thread-pool size (thread count) configuration for a service can also impact scalability and performance, and can sometimes be counter intuitive. For instance, if more clients want to access data in a cache it would seem logical to just increase the number of threads in the thread pool associated with the cache service, thereby increasing the number of threads available to handle requests. However, this is not always the case, because the ‘hand-off’ from the service thread to another worker
thread is actually quite expensive. For short operations that don’t involve any IO it is often more efficient for the service thread to process all the requests. Increasing the thread pool size will help increase the throughput and performance of a cluster if cache operations involve any network or disk IO, which will inevitably involve threads entering a ‘wait state’, while waiting for a response. Where a cache store is used or Entry Processors or Aggregators are performing network IO, the number of threads should be increased. The optimum number of threads will vary, depending on the characteristics of the data and operations being performed and should be determined by performance testing.
Although Coherence runs successfully on a wide variety of hardware, OS’s and JVM configurations, not all are equal. For instance, Garbage Collection (GC) can take its toll on performance and a JVM like JRockit Real-Time, which has deterministic garbage collection, could make the behavior of Coherence, from a GC perspective, more predictable. Likewise, different hardware and OS combinations should be considered, as some can be significantly better than others.
If you have or think you have ‘hot data’ or ‘hot spots’– that is data that there is a lot of contention for - then you may improve performance and scalability by refactoring your object structure. ‘Hot data’ can unevenly balance the network traffic, directing more traffic to some nodes than others. To determine if you have ‘hot data’ you can run the HotKeysAgent over your cache. It will provide feedback on access traffic by cache server and object key. See the Best Practice guide for more information about ‘hot data’ and strategies to deal with it.
To maximize performance, from a development perspective, a wide range of techniques can be employed to tune Coherence. Although this document is not intended to be a development guide some techniques to consider are:
- Choose the optimum serialization technique, usually POF in Coherence 3.4+. One set of tests on a low end machine produced the following results (based upon serializing and de-serializing 100k objects with a variety of fields and a nested child object):
| Test | Serialization Time (ms) | De-Serialization Time (ms) | Size (bytes) |
|---|---|---|---|
| Java | 2360 | 10078 | 867 |
| ExternalizableLite | 484 | 1625 | 309 |
| XMLBean | 734 | 2070 | 322 |
| POF | 547 | 1234 | 186 |
- Co-locating related objects using partition affinity. If you have related objects that depend or interact with each other then use KeyAssociation to ensure that they are co-located in the same partition and so the same JVM. This will eliminate any inter-object network traffic from read operations.
When using a query that runs with a Filter, but without a KeyAssociationFilter (or in 3.4, a PartitionedFilter), it will be evaluated on each and every cache server. For a trivial query, that returns (say) 5 rows, this means that the actual query processing is trivial and doesn't benefit from scale-out, but the communication overhead grows linearly with the cluster size. For a sufficiently trivial query workload, the throughput of the cluster will be the same at 100 nodes as at 10. Even worse, queries will block until all nodes return, meaning that performance will actually decrease as the cluster grows. On the other hand, queries that are sufficiently compute-intensive will scale linearly and show constant performance. Most queries fall somewhere in-between. It's all a question of the compute-communicate ratio.
Note: Using a KeyAssociatedFilter to wrap (enclose) other filters will instruct the distributed cache to apply the wrapped filter only to the entries stored at the cache service node that owns the specified host key.
- Check that all queried attributes are indexed – using a profiling filter.
- Always use putAll() and not put() to put objects into a cache (using Collections.singletonMap() if necessary) as put() returns the old value across the network.
- Consider using some of the Coherence design patterns or common code. This has been reviewed, tested and tuned, so may improve the performance of your own code.
- Using 'lite' events, where the old value of the object is not returned will reduce the amount of data being sent across the network.
- Accessing the value of indexes, rather than the object itself in an entry processor, can save the overhead of de-serialization the whole object, e.g.
```java
import com.tangosol.util.processor.AbstractProcessor;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.extractor.ReflectionExtractor;

/**
 * @author jhall
 */
public class GetAgeProcessor extends AbstractProcessor {

    private boolean liteTouchGetAge;

    public GetAgeProcessor(boolean liteTouchGetAge) {
        this.liteTouchGetAge = liteTouchGetAge;
    }

    public Object process(InvocableMap.Entry entry) {
        Integer age = null;
        if (entry.isPresent()) {
            if (liteTouchGetAge) {
                // Read the attribute via the index value, avoiding
                // de-serialization of the whole object
                age = (Integer) entry.extract(new ReflectionExtractor("getAge"));
            } else {
                // Normal mechanism to get the age
                Person person = (Person) entry.getValue();
                age = person.getAge();
            }
        }
        return age;
    }
}
```
- Storing application data in the binary format it understands, so removing data transformations (if the application is going to be the only consumer of the data).
- Keeping cache objects in a dual serialized/de-serialized format, with a custom BackingMap and Filters, to enable very fast object aggregation – as the custom Filters access the un-serialized data.
- If clients are connected to the cluster (via a proxy) over a relatively slow connection, say 100 Mbit, then performance may be improved by using the new compression filter, to reduce the size of data retrieved by the client. However, in most cases the CPU overhead of the CompressionFilter will outweigh the bandwidth savings, so do not use it indiscriminately.
- When complex filters are used, try to merge many EqualsFilter’s into an AllFilter and many OrFilter’s into an AnyFilter, to reduce the number of filters.
- Finally, isolate the optimized settings for a specific cache using a separate service configuration in the cache configuration file. This will allow each cache to be tuned separately.
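The put()-versus-putAll() advice above can be followed with a small pattern. Since NamedCache extends java.util.Map, the same shape works against a plain map, used here as a stand-in so the sketch does not depend on Coherence itself:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// put() returns the previous value (which, for a Coherence NamedCache,
// is shipped back across the network); putAll() returns nothing.
// A plain HashMap stands in for the cache in this sketch.
public class PutAllPattern {
    public static void main(String[] args) {
        Map<String, Integer> cache = new HashMap<>(); // stand-in for a NamedCache
        // Avoid: Integer old = cache.put("k", 1);    // old value returned
        cache.putAll(Collections.singletonMap("k", 1)); // no return value
        System.out.println(cache.get("k"));             // 1
    }
}
```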
For more information on performance tuning see the Performance Tuning guide.
Support Tools
Management and Monitoring
Although the choice and testing of operational support tools is sometimes
restricted to production environments, it’s very important to ensure that the
management and monitoring tools that will be used in production are also tested as
part of the test cycle. Ensuring that these overheads are included in the test metrics
means it should be possible to replicate them in a production environment.
A number of management and monitoring tools can be used to start and stop
Coherence. These include:
• WebLogic Operations Control – to start and stop Coherence cache nodes.
• Shell Scripts.
• Windows Services.
• Tanuki – Java Service tool.
A combination of the above can also be used together along with other tools not listed. For monitoring Coherence the following can be used:
• WebLogic Operations Control – for dynamically and automatically managing SLA’s and QoS. This can include starting new cache nodes when the existing cache nodes reach a memory or CPU usage threshold.
• JRockit Mission Control – JVM monitoring tool for low-level JVM performance analysis, even in production environments (its overhead is only 2-3%).
• Oracle Application Diagnostics For Java (AD4J) – provides production JVM diagnostics information for a range of JVM’s (again imposing only a 2-3% overhead). In addition it can also provide database transaction information, like the SQL being invoked.
• JConsole – for Coherence JMX information.
• JMX Reporter – feature of Coherence 3.4 that allows Coherence runtime diagnostics information to be output to reports that can be customized for external consumption.
This is not an exclusive list and other 3rd party tools can also be used to manage a Coherence cluster. A key factor in choosing which management and monitoring tools are best-suited to your environment will be the type of information you want to capture and how you want to be notified. Many management and monitoring tools, like Oracle Enterprise Manager, IBM Tivoli and BMC Patrol, can act as a JMX container to allow Coherence JMX information to be displayed and used, or can receive and report SNMP traps generated by tools like WebLogic Operations Control.
Sizing
When trying to determine how much hardware you need for your Coherence cache a good starting point is to determine how much data you need to cache. Once this has been calculated you can then determine how many JVM’s, what physical memory, CPU’s and servers you need. These guidelines will allow you to plan your initial hardware requirements for testing. An accurate view of your hardware requirements can only be validated through specific tests that take into account your application load and use cases that simulate expected users volumes, transactions profiles, processing operations, etc.
How do you determine how big your cache needs to be?
To size your cluster a number of factors need to be taken into account. As a general rule you need to allocate at least 3x the physical heap size as the data set size –
assuming that you are going to keep 1 backup copy of primary data. More accurately the size of a cache can be calculated as follows:
\[
\text{Entry Size} = \text{Serialized form of the key} + \text{Serialized form of the Value} + 150 \text{ bytes}
\]
Hence:
\[
\text{Cache Capacity} = \text{Number of entries} \times 2 \times \text{Entry Size}
\]
For instance if a cache contains 5M objects, where the value and key serialized are 100 bytes and 2k, respectively:
\[
\text{Entry Size} = 100 \text{ bytes} + 2048 \text{ bytes} + 150 \text{ bytes} = 2298 \text{ bytes}
\]
Hence:
\[
\text{Cache Capacity} = 5M \times 2 \times 2298 \text{ bytes} = 21,915 \text{ MB}
\]
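The sizing arithmetic above can be captured in a few lines of Java. The 150-byte per-entry overhead and the factor of 2 for one backup copy are taken directly from the formulas above:

```java
// Reproduce the cache sizing formulas: each entry carries ~150 bytes of
// overhead, and every entry is stored twice (primary + one backup copy).
public class CacheSizing {
    static final long ENTRY_OVERHEAD = 150; // bytes per entry

    static long entrySize(long keyBytes, long valueBytes) {
        return keyBytes + valueBytes + ENTRY_OVERHEAD;
    }

    static long cacheCapacityBytes(long entries, long keyBytes, long valueBytes) {
        return entries * 2 * entrySize(keyBytes, valueBytes); // x2 for backup
    }

    public static void main(String[] args) {
        // 5M objects, 100-byte keys, 2k values, as in the example above
        long bytes = cacheCapacityBytes(5_000_000L, 100, 2048);
        System.out.println(bytes / (1024 * 1024) + " MB"); // ~21,915 MB
    }
}
```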
An estimate of the JVM heap space your cache data requires can be verified by using the heap monitor in JConsole as well as by adding JMX monitoring to Coherence, so cache statistics can also be viewed. These cache statistics can be accessed in the following way:
1. To measure the memory requirements for your production installation start a cache server and load a portion (for instance 10%) of the largest number of objects you expect the cache to hold – this may need to be simulated if you do not have the data available.
2. Start JConsole and select the node Coherence > Cache > (Cache Service Name) > (Cache Name) > 1 > back. The last attribute Units will contain the size in bytes of the objects in the cache if you are using the BINARY unit calculator, or the number of objects if you are using the FIXED (default) unit calculator. If you are only loading 10% of the highest number of cache objects then you will need to multiply this value by 10 to get the total memory requirements for the cache objects, and then by the object size if the FIXED unit calculator is being used.
If you are indexing any object attributes then you will also need to take into account the size of these. Un-ordered cache indexes consist of the serialized attribute value and the key. Ordered indexes include additional forward and backward navigation information.
Indexes are stored in memory – though they will not be counted in the cache size statistics found using JConsole. Each node will require 2 additional maps (instances of java.util.HashMap) for an index - one for a reverse index and one for a forward index. The reverse index size is a cardinal number for the value (size of the value domain, i.e. number of distinct values). The forward index size is of the key set size.
The extra memory cost for the HashMap is about 30 bytes. Extra cost for each extracted indexed value is 12 bytes (the object reference size) plus the size for the
value itself. For example, the extra size for Long value is 20 bytes (12 bytes + 8 bytes) and for a String is 12 bytes + the string length. There is also an additional reference (12 bytes) cost for indexes with a large cardinal number and a small additional cost (about 4 bytes) for sorted indexes.
With this in mind you can calculate an approximate index cost. For an indexed Long value of large cardinal it's going to be about:
\[
\text{Index size} = \text{forward index map} + \text{backward index map} + \text{reference} + \text{value size}
\]
Hence:
\[80 \text{ bytes} = 30 \text{ bytes} + 30 \text{ bytes} + 12 \text{ bytes} + 8 \text{ bytes}\]
For an indexed String of an average length of 20 chars it's going to be about:
\[112 \text{ bytes} = 30 \text{ bytes} + 30 \text{ bytes} + 12 \text{ bytes} + (20 \text{ bytes} \times 2)\]
To summarize, the index cost is relatively high for small objects, but it's constant and becomes less and less expensive for larger objects.
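The per-entry index cost above reduces to a one-line formula (constants as given in the text: ~30 bytes per HashMap entry for each of the forward and reverse index maps, plus a 12-byte object reference, plus the extracted value itself):

```java
// Approximate index memory cost per entry, following the figures above:
// forward index map (~30 bytes) + reverse index map (~30 bytes)
// + object reference (12 bytes) + the extracted value itself.
public class IndexCost {
    static long perEntry(long valueBytes) {
        return 30 + 30 + 12 + valueBytes;
    }

    public static void main(String[] args) {
        System.out.println(perEntry(8));      // indexed Long       -> 80
        System.out.println(perEntry(20 * 2)); // 20-char String     -> 112
    }
}
```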
Sizing a cache is not an exact science, as you can see. Assumptions on the size and maximum number of objects have to be made. So as an example for a Trades cache:
- Estimated average size of cache objects = 1k
- Estimated maximum number of cache objects = 100k
- 5 string indexes, 5 * 112 * 100k = 56MB
- Approx cache size = 1k * 100k + 56MB = ~156MB
Because each node needs to hold backup as well as primary data ~312MB of data will actually need to be stored.
Each JVM will store on-heap data itself and require some free space to process data in. With a 1GB heap this will be approximately 300MB or more. The JVM process address space for the JVM – outside of the heap is also approximately 200MB.
Hence to store 312MB of data will require the following memory for each node in a 2 node JVM cluster:
\[312 \text{ MB (for data)} + 300 \text{ MB (working JVM heap)} + 200 \text{ MB (JVM executable)} = 812 \text{ MB (of physical memory)}\]
Note that this is the minimum heap space that is required. It is prudent to add additional space, to take account of any inaccuracies in your estimates, say 10%, and for growth (if this is anticipated).
There are a number of techniques to reduce the size of a cache. One mentioned already is to not use backups. If this is not an option, because the cache data is being updated, then an alternative approach could be to backup up the changes, but only while they are being persisted to some external storage – where they can easily be retrieved. This can be accomplished by specifying the `backup-count-after-writebehind` element in the cache configuration. Another is to `intern()` Java Strings. This may reduce the memory your data takes up if you have a significant amount of repeating String data, e.g. currency codes, but there will be a small overhead in removing the redundancy. POF as a serialization mechanism will also significantly reduce the size of your cached objects, see above for more details. Finally, if data volumes increase, additional capacity can grow dynamically by adding additional servers with no service interruption.
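The String.intern() suggestion relies on repeated values collapsing to a single canonical instance. A quick demonstration in plain Java:

```java
// String.intern() collapses repeated String values (e.g. currency codes)
// to one shared canonical instance, trading a lookup cost for heap savings.
public class InternDemo {
    public static void main(String[] args) {
        String a = new String("EUR");
        String b = new String("EUR");
        System.out.println(a == b);                   // false: two heap copies
        System.out.println(a.intern() == b.intern()); // true: one shared copy
    }
}
```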
In summary the memory requirements for a cache are:
\[
\text{Cache Memory Requirement} = ( (\text{Size of cache entries} + \text{Size of indexes}) \times 2 \text{ (for backups and primary’s) } + \text{JVM working memory (\sim 30\% of 1GB JVM)}
\]
**How do you limit the size of your cache?**
A cache can be size limited so that it never exceeds a specified number of objects or size. This allows data to be cached even when the available hardware cannot accommodate a full data set. When the cache limit is reached an eviction policy is invoked to remove old data to make way for new data. The eviction policy used can be based upon a ‘Least Recently Used’ (LRU), some other in-built policy or a custom one.
A cache with a size limit and eviction policy can read objects not in a cache ‘on-demand’ by configuring the cache to read-through the missing data from the underlying cache store, like a database or file system. The Coherence read-through mechanism will only automatically load missing objects when a key based lookup is made. It will not load missing objects when a query is performed and some objects that would match the query are not present in the cache – Coherence cannot translate object queries into SQL or file based search queries, etc. Therefore, it does not make much sense to use a size limited cache with an eviction/expiry policy when queries need to be performed against the cache, as the queries will not necessarily return the correct results.
A size limit and cache eviction policy for a cache is set by modifying the cache configuration file. Below is an example of how to size limit the number of entries a particular cache node will store:
```xml
...<unit-calculator>BINARY</unit-calculator>
<!-- 100000000 = ~100 MB -->
```
Note: A UnitCalculator implementation gives the size of a cache node based upon the number of entries. This is the default calculator. If the BINARY mode is specified the BinaryMemoryCalculator is used and the physical memory (in bytes) of a nodes entries is returned, e.g. to a JMX bean. However, the BinaryMemoryCalculator implementation can only determine an accurate entry size if both the entry key and value are Binary objects; otherwise, an exception will be thrown during the unit calculation. The size limit is specified in the Coherence configuration file for the high-units for a node. In Coherence 3.4 the binary calculator is now the default.
These settings restrict the total amount of data that can be held in memory within this cache server process i.e. ~100MB (this size is in bytes). By setting cache size limits, out of memory errors can be avoided. The default eviction policy is a hybrid eviction policy that chooses which entries to evict based the combination (weighted score) of how often and recently they were accessed, evicting those that are accessed least frequently and were not accessed for the longest period first.
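In context, the fragment above might look like the following. The element names follow the standard Coherence cache configuration schema; the scheme name and limit are illustrative, not recommendations:

```xml
<distributed-scheme>
  <scheme-name>size-limited</scheme-name>
  <backing-map-scheme>
    <local-scheme>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>100000000</high-units> <!-- 100000000 = ~100 MB -->
      <unit-calculator>BINARY</unit-calculator>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>
```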
How many JVM’s?
Generally speaking its better to use lots of JVM’s with a small heap, say 512MB - 1GB, than a fewer number of JVM’s with a larger heap. If the heap size is greater than 1GB then you will generally need to consider tuning the Garbage Collection (the process of recovering unused memory) settings to prevent prolonged GC times from a full GC. Running with a small heap (and setting the min and max values to the same value) means that GC times will be short and the overall performance of the data grid more consistent and responsive. Furthermore, with lots of smaller JVM’s any node failure in the cluster will also have less impact on the overall cluster as less data will have to be re-balanced.
The overhead of incremental heap expansion can be eliminated by explicitly setting the minimum (-Xms) and maximum (-Xmx) JVM heap size to the same value at startup. This value should also be used for all cache nodes in the cluster, so that all cluster members have the same JVM heap size.
The optimum memory to CPU ratio (MEM:CPU) will depend on the type of processing carried out in the data grid. For simple data operations, gets and puts, a memory to CPU ratio of 8:1 may be sufficient, e.g. a 2 CPU, dual-core server with 32GB of memory. However, for more CPU intensive operations, like parallel aggregations, queries, extensive use of entry processors, etc., a higher ratio of perhaps 4:1 may be required, e.g. a 2 CPU, dual-core server with 16GB of memory. The overhead of the operating system on a server should also be considered when determining the number of JVM cluster nodes to start, as should the management overhead. As a guide, a 2 CPU dual-core server with 16GB of memory should be able to comfortably run approximately 8 to 10 JVMs, each with a 1GB heap, leaving the OS with reasonable 'head-room'. To determine the maximum number you can use top on Unix or the System Monitor on Windows. Although these guidelines provide a good starting point for hardware selection, the optimum configuration should be validated through testing.
To maintain a highly available cluster, sufficient cache servers should be deployed so that if one fails the surviving \( n \) members will be able to hold the same amount of data that was previously stored by \( n + 1 \) cache nodes. So to protect against process failure, if \( n \) cache nodes are required to hold all the data \( n + 1 \) nodes must be deployed. To protect against server failure \( s + 1 \) servers must be deployed, where \( s \) is the number of servers required to hold all the data. To summarize:
\[
\begin{align*}
&n + 1 \text{ JVM's required to store all the data and protect against process failure} \\
&s + 1 \text{ servers required to run } n + 1 \text{ JVM's and protect against server failure} \\
&\text{Hence: } \frac{n + 1}{s} \text{ JVM's per server}
\end{align*}
\]
Generally you should provide more contingency than 1 extra JVM or even 1 additional server, so that if a failure occurs you are not vulnerable to a further failure before the first has been recovered from.
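The sizing arithmetic above can be sketched in a few lines of Java. This is a minimal, hypothetical helper (not part of Coherence): the usable-heap figure and JVMs-per-server guideline are assumptions you would replace with your own sizing numbers.

```java
// Hypothetical cluster-sizing sketch based on the n+1 / s+1 rules above.
public class GridSizing {

    // JVMs needed to hold all the data, plus one for process-failure contingency.
    static int jvmsRequired(long dataBytes, long usableHeapPerJvmBytes) {
        int n = (int) Math.ceil((double) dataBytes / usableHeapPerJvmBytes);
        return n + 1;
    }

    // Servers needed to run those JVMs, plus one for server-failure contingency.
    static int serversRequired(int jvms, int jvmsPerServer) {
        int s = (int) Math.ceil((double) jvms / jvmsPerServer);
        return s + 1;
    }

    public static void main(String[] args) {
        // e.g. 10GB of primary + backup data, ~700MB usable per 1GB heap (assumption)
        int jvms = jvmsRequired(10L * 1024 * 1024 * 1024, 700L * 1024 * 1024);
        int servers = serversRequired(jvms, 8); // ~8 JVMs per server guideline
        System.out.println(jvms + " JVMs on " + servers + " servers");
    }
}
```

Running this with the assumed figures yields 16 JVMs spread across 3 servers, i.e. one JVM and one server of contingency on top of the raw capacity requirement.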
**Configuration Management**
Establishing a common configuration file, or set of files, that is accessed centrally has been found to be the simplest way to provide a consistent cluster configuration. However, if a client application starts up and shuts down very quickly – for instance because the client libraries being used have memory-leak issues – then it is sometimes more performant to copy the client configuration files to a local location.
Add JMX monitoring settings to the Coherence JVMs, and the `-verbosegc` setting to monitor GCs, so that any problems can be more easily diagnosed. Also add the `-server` option to improve the JVM's performance, and fixed minimum and maximum heap sizes to pre-allocate memory at start-up. For example:
```
-Dcom.sun.management.jmxremote
-Dtangosol.coherence.management.remote=true
-Dtangosol.coherence.management=all
-Dcom.sun.management.jmxremote.port=<JMX_port>
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
```
and the GC and heap settings:
```
-verbosegc
-server
-Xms1024M
-Xmx1024M
```
Also ensure that logging is configured correctly and that any output will be sent to the right place, e.g. a log file – including JVM output. A Coherence override file
may be a more appropriate place to capture configuration settings, like logging parameters, rather than passing them as command line arguments.
Coherence does not require license keys to deploy or run it. However, there are a number of different versions that are licensed differently. To ensure that you are using only the features for the version you have licensed (the default is Grid Edition), you can set the version in the `tangosol-coherence-override.xml` file as shown below.
```xml
<!-- Specify license and mode to use -->
<license-config>
<edition-name system-property="tangosol.coherence.edition">EE</edition-name>
<!-- This value actually has to be overridden as a system property -->
<license-mode system-property="tangosol.coherence.mode">dev</license-mode>
</license-config>
```
**Security**
Security is key for many Coherence deployments. Coherence comes with a variety of security features that enable access to cluster nodes to be secured, actions within a cache to be restricted (based upon the identity of a client) and communications traffic to be encrypted. Further information about how to incorporate all these features into your deployment can be found in the [online documentation](#).
**Change Control**
Relatively minor changes to the configuration of a cluster, such as adding a cache, can be made in a rolling fashion across the cluster. More drastic changes may require a cache restart. For instance, a minor patch can probably be applied in a rolling fashion, but a major release upgrade will require considerable planning, which is outside the scope of this document.
**How do I change my configuration on the fly?**
By configuring a separate service for each cache it is possible to start, stop and re-configure caches independently from each other.
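For example, each cache can be mapped to a scheme with its own service name in the cache configuration, so that the service can be stopped and re-configured without affecting other caches. The cache, scheme and service names below are illustrative:

```xml
<caching-scheme-mapping>
  <cache-mapping>
    <cache-name>orders</cache-name>
    <scheme-name>orders-scheme</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
  <distributed-scheme>
    <scheme-name>orders-scheme</scheme-name>
    <!-- A dedicated service for this cache only -->
    <service-name>OrdersDistributedService</service-name>
    <backing-map-scheme>
      <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>
</caching-schemes>
```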
**How do I change my cache objects in a production system?**
If the format of an object is being changed in a production system, the cache that holds the object will need to be completely re-started, unless the change is backwardly compatible with the previous object format. Some customers have successfully tried online upgrades in their test environments but most re-start the cache with the new objects.
To perform an online update of an object's structure, it has to be backwardly compatible and also written to cope with different versions being used in the same cache.
To accomplish this, each class has a version number and a binary array attribute to store new attribute data. Where an old and new version of a class exist and are deployed to the client and server platforms by rolling re-starts, the following scenarios can occur:
- **Old client accessing old object and new client accessing new object**
In each of these cases, each will be able to access all the attributes available on the client-side object class.
- **Old client reading newer object**
If an old client reads a new object, where the new version of the object has additional attributes, then the excess attribute values are stored in the binary array when the object is de-serialized. When the object is written back to the cache, or serialized, the preserved binary data is written back after the old attributes. This means the newer attributes are not lost.
- **New client accessing old object**
If a new client accesses an old object, the old object will be converted into a new object and the fields where there are no values to de-serialize will be given default values.
Versioning cache objects in this manner requires a small amount of extra work, but makes them upgradeable in a safe manner. For more information about versionable objects, see the Coherence Java interface `EvolvablePortableObject` and Coherence .NET interface `IEvolvablePortableObject`.
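The versioning pattern described above can be sketched in plain Java. This is a hypothetical, simplified illustration of the idea (a version number plus an opaque binary remainder), not the actual `EvolvablePortableObject` contract; the class and method names are invented:

```java
// Minimal sketch of the versioned-object pattern: keep the data version and
// any unread bytes from a newer version so they survive a read/write round trip.
public class VersionedTrade {
    public static final int IMPL_VERSION = 2; // version this class implements
    private int dataVersion = IMPL_VERSION;   // version of the data actually read
    private byte[] futureData = new byte[0];  // attributes from a newer version, kept opaque

    // Called by the deserializer when the stored form is newer than this class:
    // record its version and retain the unread remainder.
    void readRemainder(int storedVersion, byte[] remainder) {
        this.dataVersion = storedVersion;
        this.futureData = remainder;
    }

    // Called by the serializer: after writing the known attributes,
    // append the preserved bytes so newer attributes are not lost.
    byte[] remainderToWrite() {
        return futureData;
    }

    int getDataVersion() {
        return dataVersion;
    }
}
```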
If the new object is not compatible with the previous version, then the upgrade path is more complicated and will depend on the system's architecture and other factors. There are a number of different approaches. One is to clear the old cache (which can be done from JConsole), add the new classes to the cluster nodes' CLASSPATHs, and restart each node in a rolling fashion. Alternatively, a new cache can be set up in parallel, the objects gradually migrated from the old cache to the new cache, and the object values converted during the replication. A rolling upgrade of the nodes' CLASSPATHs would be required and, at some point, a client 'black-out' to ensure all data had been migrated. During this client 'black-out' period the clients' CLASSPATHs could also be updated with the new classes and their configuration changed to point at the new cache.
**How do I keep my .NET, C++ and Java code synchronized?**
When developing applications that have .NET or C++ clients, a corresponding Java object will be required, for example if queries and entry processors are going to be
run against the data stored in the cache. At the moment this requires manual coding of both the client and the server objects in their corresponding languages.
TESTING
Testing Strategies
Run the basic test tools provided by Coherence first, i.e. the datagram test, the multicast test and perhaps the access test. These will very quickly give you an idea of the practical limits of your test environment. For instance, the datagram test will tell you how much data can be sent across your network infrastructure; for Gbit Ethernet a maximum of roughly 100 MB/s can usually be transmitted. If each object is on average 10KB, then the maximum number of get requests will be ~10k per second.
These kinds of tests will tell you what is theoretically possible with your test environment before running any of your application test scenarios.
What to test?
Most systems where Coherence is being introduced already work – just not well enough or not for much longer. So Coherence is being utilized to:
- Make a system more scalable
- Make it more resilient
- Improve performance, e.g. lower latency
- Alleviate load on an existing resource through caching
Since testing is an open-ended activity, care needs to be taken to focus on those areas where Coherence is going to bring the most value.
What to measure?
The short answer here is everything. So monitor the network (which will often be a bottleneck), the OS (CPU, memory, etc.), the JVM (GC's), information/statistics from Coherence (through JMX) and test harness (number of users, messages, etc.). This will provide a picture of the cause of any bottleneck or limiting factor and allow tuning and trouble shooting to take place.
Before testing also decide on targets, SLA's, etc. that the production system will need to meet. Once they have been met, e.g. a particular response time, then focus on another system characteristic, for instance number of users.
How to test?
Try and use as representative a system as possible, making the network and server hardware as close as possible to the production environment.
Choose the difficult tests first, i.e. the hardest scenarios you expect Coherence to have to handle. Then test the system until it breaks, either because there is insufficient network bandwidth, processing capacity, cache storage, etc.
Example Proof of Concept (PoC) Tests
Measuring Latency and Throughput
When evaluating performance you try to establish two things, latency and throughput. A simple performance analysis test may simply try performing a series of timed cache accesses in a tight loop. While these tests may accurately measure latency, in order to measure maximum throughput on a distributed cache a test must make use of multiple threads concurrently accessing the cache, and potentially multiple test clients. In a single threaded test the client thread will naturally spend the majority of the time simply waiting on the network. By running multiple clients/threads, you can more efficiently make use of your available processing power by issuing a number of requests in parallel. The use of batching operations can also be used to increase the data density of each operation. As you add threads you should see that the throughput continues to increase until you've maxed out the CPU or network, while the overall latency remains constant for the same period.
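The multi-threaded approach above can be sketched as follows. This is a minimal, hypothetical harness: a `ConcurrentHashMap` stands in for the Coherence `NamedCache` so the sketch is self-contained; a real test would obtain the cache via `CacheFactory` and typically run several client JVMs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal throughput-test sketch: issue puts and gets from several threads
// in parallel and report aggregate operations per second.
public class ThroughputTest {

    static long run(Map<Integer, String> cache, int threads, int opsPerThread) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < opsPerThread; i++) {
                    cache.put(i, "value-" + i); // requests issued in parallel
                    cache.get(i);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        long elapsedNs = Math.max(1, System.nanoTime() - start);
        long totalOps = 2L * threads * opsPerThread; // one put + one get per iteration
        return totalOps * 1_000_000_000L / elapsedNs; // ops per second
    }

    public static void main(String[] args) {
        long opsPerSec = run(new ConcurrentHashMap<>(), 4, 10_000);
        System.out.println(opsPerSec + " ops/s");
    }
}
```

Increasing the thread count (and, against a real distributed cache, the client count) should raise throughput until CPU or network saturates, while per-operation latency stays roughly constant.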
Scalability
To show true linear scalability as you increase the cluster size, you need to be prepared to add hardware, and not simply JVM’s to the cluster. Adding JVM's to a single machine will scale only up to the point where the CPU or network is fully utilized.
Plan on testing clusters with more than just two cache servers (storage-enabled nodes). The jump from one to two cache servers will not show the same scalability as from two to four. This is because by default Coherence maintains one backup copy of each piece of data written into the cache, and the process of maintaining backups only begins once there are two storage-enabled nodes in the cluster (there must be a place to put the backup). Thus when you move from one to two, the amount of work involved in a mutating operation such as a cache put() actually doubles, but beyond that the amount of work stays fixed and is evenly distributed across the nodes.
Data Reliability
Ensuring that data remains accurate and available even in the event of server failure is a fundamental requirement of a distributed caching technology. Data stored in the distributed caching system must be equally resilient to individual JVM failures as well as server failures (i.e. NIC failure, Ethernet cable removal, power supply failure, etc.). Further, all data access requests (i.e. access/update) must complete, all transactions must complete and the system must simultaneously load balance back to a steady state of primary and back-ups across the distributed environment.
Destructive Testing
In order to understand what a system is capable of, it is equally important to understand when it will break. If the system breaks, that failure should be
predictable in nature and easily identifiable. Determining the breaking point should be done by:
- Overloading the distributed caching technology with a larger data set than the memory that is available to see how it reacts, compensates, etc.
- Taking a large (in terms of number of objects) distributed cache and steadily increasing the number of clients continuously accessing the data, to see (1) what performance outliers exist and (2) if and when the system fails.
- Performing a long running test (days) in which a number of clients are accessing the data stored in a large (in terms of number of objects) distributed cache and capturing the performance characteristics to determine what performance outliers exist.
DEPLOYING TO PRODUCTION
Transitioning from Test to Production
Ensure that you have followed the Production Checklist and other key documents (mentioned above). Additionally, the following have been found to ease the transition:
- Establish standard configuration files and override files (used for common settings)
- Create a build checklist, or server image, to ensure all servers are identical, which will ease troubleshooting
If you are using Extend clients, consider using separate JVMs for the proxy servers. This is because the JVM GC parameters may need to be tuned differently, as proxies will be creating and destroying objects at a different rate to standard cache cluster members. This will also enable the number of proxy servers to be varied independently of the cache servers.
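A dedicated proxy JVM is configured with a proxy service in its cache configuration. The service name, host and port below are placeholders to be replaced with your own values:

```xml
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>proxy-host</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
```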
Roadmap
An example transition roadmap for moving from a PoC to production could look something like this:
1. **Determine the approximate Data Grid size** based upon application sizing tests. This can be done in a development environment provided that the production OS, etc. is the same as in development, e.g. both the development and production OS are 32-bit.
2. Use the Production Checklist and Performance Guide to help with your choice of hardware.
3. **Create a representative test environment** – based on production hardware, network configuration, OS, etc. This should be as close as possible, perhaps representing a subset of the production environment.
4. Follow the Production Checklist and tune OS, network configurations, etc. as per Performance Tuning Guide.
5. **Perform datagram and multicast tests** (assuming multicast is going to be permitted in the production environment; otherwise set up Well Known Addresses). This should stress your network to determine the physical limits for moving data. Before you perform the datagram test, ensure that the network segment is isolated from the rest of the organization's network – it will generate a lot of traffic.
6. **Configure the data grid for production** - e.g. set mode to ‘prod’, add JMX parameters, etc. and place configuration files in a central location, e.g. a shared drive or Web Server (URL’s can be used to reference the cache configuration file).
7. **Deploy your application using chosen production management tools**, starting/stopping the data grid (e.g. via WebLogic Operational Control, scripts) to check that it works correctly. Also install any monitoring tools, like WebLogic Operational Control, and ensure that the required operational management information is being captured correctly.
8. **Re-check sizing calculations** in test environment.
9. Once the test environment has been set up as per production, **perform targeted tests on the test platform**. See the Testing Strategies section above for more details.
10. **Iteratively tune and re-test the application** until you are getting the required level of performance and have exhausted all available options – in terms of Coherence, network, OS, JVM tuning, etc.
11. **Replicate and scale-up the test environment in the production environment.**
12. **Re-test** to ensure sufficient capacity and comparative performance. Check monitoring and management tools are functioning correctly and then deploy.
CONCLUSION
It is very important to thoroughly read the guideline documents for Coherence in conjunction with this document, to gain an overall picture of the steps required to move a Coherence environment from a PoC to production. Ultimately every application uses Coherence in a slightly different manner but the overall principles outlined above should hold true. Planning and targeted testing in a representative environment should minimize the risks involved in a new Coherence deployment.
Link IDE : A Real Time Collaborative Development Environment
Kevin Grant
San Jose State University
Follow this and additional works at: https://scholarworks.sjsu.edu/etd_projects
Part of the Computer Sciences Commons
Recommended Citation
DOI: https://doi.org/10.31979/etd.rqpj-pj3k
https://scholarworks.sjsu.edu/etd_projects/227
This Master's Project is brought to you for free and open access by the Master's Theses and Graduate Research at SJSU ScholarWorks. It has been accepted for inclusion in Master's Projects by an authorized administrator of SJSU ScholarWorks. For more information, please contact scholarworks@sjsu.edu.
Link IDE : A Real Time Collaborative Development Environment
A Project Report
Presented to
The Faculty of the Department of Computer Science
San José State University
In Partial Fulfillment
of the Requirements for the Degree
Master of Science in Computer Science
by
Kevin Grant
May 2012
SAN JOSE STATE UNIVERSITY
The Undersigned Project Committee Approves the Project Titled
Link : A Real Time Collaborative Development Environment
by
Kevin Grant
APPROVED FOR THE DEPARTMENT OF COMPUTER SCIENCE
SAN JOSÉ STATE UNIVERSITY
May 2012
--------------------------------------------------------------------------------------------------------------------------
Dr. Soon Tee Teoh, Department of Computer Science
Date
--------------------------------------------------------------------------------------------------------------------------
Dr. Chris Pollett, Department of Computer Science
Date
--------------------------------------------------------------------------------------------------------------------------
Prof. Frank Butt, Department of Computer Science
Date
APPROVED FOR THE UNIVERSITY
--------------------------------------------------------------------------------------------------------------------------
Associate Dean Office of Graduate Studies and Research
Date
ABSTRACT
Link: A Real Time Collaborative Development Environment
by Kevin Grant
While working on large scale development projects all software engineers find themselves, at some point, working with a source control system in order to add or revert changes to code. With software projects involving a multitude of programmers this is a crucial part of successful development. When working on a smaller project however, with a tight knit group, setting up and dealing with such a system can become more work than it is worth. To solve this problem a real time collaborative integrated development environment could be used. The IDE’s focus would be on providing a collaborative setting for programming teams or pair programming by taking advantage of real time text editing, the ability to build and run code, chat, and various other team and task oriented features. Instead of running into code conflicts at check-in time, users would be able to see conflicts appearing in real-time. This would allow small programming teams to bypass source control, avoid wasting time, and spend more time collaborating. Real time text editing has recently become popular with its appearance in Google Docs. There are a number of open source applications that support real time text editing. Real time editing by multiple users allows not only for excellent collaborative programming but can also be very effective in teaching sessions of programming. Other features such as chatting and task lists would also help to create a fully immersive and organized collaborative environment where users do not need outside tools in order to collaborate.
ACKNOWLEDGEMENTS
I would just like to say thank you to all the people who have helped me with my project. Thank you very much Dr. Soon Tee Teoh. You have been of tremendous help and guidance the past months. I would like to thank Dr. Chris Pollett and Frank Butt for agreeing to lend their time and be a part of my committee. I would also like to thank my friends and family for the encouragement. Thanks to all the random people on the internet who have offered wisdom. I would like to thank the three users who helped me test run this program. God knows they spent a lot of time dealing with the bugs that so inevitably lurk in a program such as this.
Table of Contents
1. Introduction
1.1. Introduction
1.2. Justification
2. Related Work
2.1. Browser Based
2.1.1. eXo Cloud IDE
2.1.2. Cloud9 IDE
2.1.3. Comparison to Link IDE
2.2. Eclipse Communication Framework Project
3. Research Topics
3.1. Real Time Collaborative Editing
3.1.1. Operational Transform
3.1.2. Differential Synchronization
3.1.3. Cursor Preservation
3.1.3.1. Absolute Referencing
3.1.3.2. Context Matching
4. Design and Implementation
4.1. GUI Decisions
4.1.1. Framework choice
4.1.2. Docking Panels
4.1.3. Code Editor
4.2. Persistence
4.2.1. SQL Database
4.2.2. FTP Server
4.2.3. Storing local data
4.3. Compilation / Building Code
4.4. Collaborative implementations
4.4.1. Project/User Authentication
4.4.2. Task List
4.4.3. Chatting
4.4.4. Real-Time Collaborative Editing
4.4.4.1. Difference and Patching Algorithm
4.4.4.2. Differential Synchronization
4.4.4.3. Cursor Preservation
5. Usability Testing
5.1. Test one – Project Creation, Resume project, Build Project
5.2. Test two – Synchronous editing
5.3. Test three – Communication, Task Lists, Overview
5.4. Final Test
6. Future Implementations
6.1. Synchronous Locks
6.2. Concurrent Performance
6.3. Multiple Visible Cursors
6.4. Fully Web Hosted Servers
6.5. Fully Designed GUI – Custom Windows / Controls
7. Conclusion
8. References
9. Figures and Code Example Page Reference
1. Introduction
1.1 Introduction
Distributed computing involves multiple computer systems communicating with each other through a network to achieve a common goal. As distributed computing, also known as cloud computing, becomes more popular, more applications are moving towards accessible and flexible network-based designs. As applications begin to take advantage of these designs, systems are beginning to cater to the user, offering on-demand access to data as well as instant communication with other users. One of the advantages of these types of systems is the ease with which collaboration can occur between users. Developing your application with a cloud-based design can eliminate factors limiting collaboration such as distance or time zone [2]. Applications such as Google Docs have introduced real-time collaborative editing to users and have become widely successful. These new collaborative methods lead to effective teamwork and an improved product. Given these trends, it is likely that more developers will not only begin coding more cloud applications but that these applications themselves will be coded in the cloud as well [1]. The focus of my project is on the creation and analysis of an IDE which takes full advantage of this ideology and the collaborative benefits it provides. The program's development title is Link IDE.
1.2 Justification
Currently, the best way to collaborate on a software project is with a source control system, even though for smaller groups of programmers it may not be ideal. There are several options available; common choices are SVN, CVS, and a now-popular system named Git. CVS and SVN require setting up a central location for version control information, to which everyone must connect and write. Git offers an alternative by keeping a simple .git folder in the project directory, which each user writes to and can later merge with another user's. The key thing to understand here is that in order for users to have source control in their project, they must understand source control. They have to spend time researching which program is most suitable for them, how to set it up, and how to use it. Of course this is trivial work in a large software development team where the main project is in development for many months. But consider a small group of people, maybe even a pair of programmers. They have been assigned a programming assignment, they each have their own laptop, and they want to start coding. If they wish to work effectively as a group, their only option, other than merging manually, is a source control system. If it is their first time using one, they have to take the time to set it up, which may take a very long time. If the project is not very large and does not require much coding, setting up the source control might end up taking almost as much or even more time than the project itself. Even with the system already in place, time can be lost using these types of systems, because conflicts in code can occur. Imagine again the pair of programmers, working primarily on one file in the project. As they add implementations here and there while working side by side, they may overlap certain sections of code and create conflicts. These conflicts may or may not resolve at check-in time, which can lead to frustrating problems and more time spent on source-control-related issues. For these two programmers a source control system is not ideal, but it is the only option: they spend too much time setting it up, taking occasional breaks to check code in, and possibly resolving conflicts. But what other option do they really have?

This is where my project would be most useful. In small-scale projects, setting up a source control system can become more work than it is worth. The constant reminder that you have to check in code to avoid conflicts and to distribute code to teammates can completely disrupt the programming flow. In my program, the pair of programmers would not need to spend any time setting up a system. They would be able to bounce ideas off each other in real time by writing code and seeing each other's edits. Collaboration on a project like this should be a simple process that increases productivity rather than wasting time. There are no check-in conflicts in this system, because conflicts that arise can be seen in real time by the users.
The use of a real-time collaborative IDE where users can write and build code simultaneously would greatly enhance programming teaching/tutoring sessions. If you have ever tutored programming, especially remotely, you may have already thought about this sort of idea. The tutor and student would connect to each other and start a project. The tutor can showcase some ideas and programming without having to be at or type on the computer with the student. When the student makes a mistake the tutor can quickly fix it while the student continues. This type of instant feedback can greatly increase the effectiveness of a tutoring session.
Aside from how real-time editing can be a valuable substitute for source control and a valuable addition for tutoring, Link IDE's other collaborative features also make it a good development choice. However, one cannot simply add synchronous editing to an IDE and call it a collaborative environment. Multiple users editing the same document sounds extremely chaotic, and when working with delicate structures like programming projects it can get very messy. To organize the collaboration there are a number of major features, including a task list, personalized user names, network storage of the project, and chat. Web storage of code is especially useful because it gives developers access to code, offers a less expensive build infrastructure, and opens the potential for expandable development [1]. The combination of all these features, which will be explained in detail later, truly forms a collaborative environment which should be a common choice for the development of programming projects today.
2. Related Work
As a part of my project it is important to examine the already existing implementations of a collaborative integrated development environment (IDE). I decided to showcase two browser based implementations and then a plug-in that has been developed for the very popular Eclipse IDE.
2.1. Browser Based Collaborative IDEs
2.1.1. eXo Cloud IDE
eXo Cloud IDE is a browser based development environment that allows for the collaborative development of applications that can be deployed directly to a Heroku PaaS environment [11]. Heroku is a very powerful web based platform for Ruby, JavaScript including node.js, and they have recently added Java web app support. They boast that deploying directly within a PaaS environment allows for quick migration from the development stage to deployment [11]. It also contains a real time collaborative editor for use with up to five people.
2.1.2. Cloud9 IDE
Cloud9 IDE is a web-based IDE for JavaScript and Node.js applications, along with HTML, CSS, PHP, Java, Ruby, and 23 other languages [12]. Cloud9, like eXo Cloud, is a browser-based implementation of a collaborative IDE, aimed primarily at Node.js and JavaScript developers. Cloud9 has many of the features available in popular IDEs, such as syntax highlighting, the ability to run and debug code, and keyboard shortcuts [12]. It has by now formed a fairly large user base and is quite successful.
2.1.3. Comparison to Link IDE
Obviously it is hard to compare my own project, which has been worked on by only myself in a short span, to implementations that have been in development for years with many developers on staff. However, just as some of their features are advantageous compared to my approach, I do feel that some of the differences between my implementation and theirs can highlight the benefits of my choices. Although browser-based solutions are inherently more accessible, I feel many users might find it awkward to work in a browser-only setting. There are also various limitations to working in a browser. Browsers can be very limited when it comes to performing computation on a local level and thus rely on servers to process much information [14, 364]. Mark Silver describes the various usability issues with browser-based applications in "Browser-based Applications: Popular but Flawed?". These include ambiguity between browser functions and application functions (the Back button), performance issues regarding screen updates, reliance on page orientation, and the inherent statelessness of web pages [14]. Cloud9 IDE and eXo Cloud IDE may or may not suffer from these problems. One thing worth noting is that these IDEs focus mainly on web-based development. Development of more performance-intensive code, such as mobile apps, graphics-related applications, or artificial-intelligence programs that train and predict using intense processing, might well require a desktop-application-based IDE which takes full advantage of available computing power. Although the browser-based model is useful for web development, I believe that in many situations a more traditional desktop application might be preferred as the model for a collaborative IDE.
2.2. Eclipse Communication Framework Project
The Eclipse Communication Framework project (ECF) is a project aimed at creating distributed applications for Eclipse. One part of this specifically is the Cola plug-in, which allows for synchronous editing between users in Eclipse [13]. The Cola plug-in can be seen as one very important step in creating a collaborative IDE. Eclipse is already such a developed IDE that millions of people enjoy using it for development, and it is an incredible step forward to begin to integrate these types of features into it. The feature is at this time only a plug-in that must be downloaded, installed, and configured to work with Eclipse. Although it is a very useful plug-in, it is only one step in the process of collaboration. I included features such as a task list and chatting in my program along with the synchronous editing so that users could organize their work in an efficient way. Having a task list allows the collaboration to live on even when multiple users are not currently signed in. I think the Cola plug-in is a remarkable tool, but it does not by itself qualify the Eclipse IDE as a collaborative environment.
3. Research Topics
3.1 Real Time Collaborative Editing
A key feature of my project's implementation is allowing users to simultaneously edit a document and see each other's edits as they appear. This is known as real time collaborative editing (RTCE). RTCE has a long history, first appearing in 1968. For over 40 years it was largely overlooked as an important tool because of performance issues. With the advent of Web 2.0 and web applications such as Google Docs, RTCE has become a much-loved feature among teams. Google Docs is the most successful real time collaborative editor so far; it has revolutionized the way students work on group reports, spreadsheets, and presentations. This type of collaborative editing has been slow to be adopted by programmers. Recently the code behind EtherPad, the company that implemented the technology serving as the basis for Google Docs' RTCE, was made open source. This, along with many other advances in the field, has opened the door for the development of new RTCE applications. As a result, more and more applications dealing with word processing have added this feature [1]. The two most common and efficient means of implementing RTCE are known as operational transformation and differential synchronization.
3.1.1 Operational Transformation
Operational transform breaks each action inside a text editor (insert character, delete, tab, enter) down into a series of operations, each of which is transformed to conform to the operations preceding it [4, 472]. To illustrate this, the following rather simple example can be used (see Figure 1).
Suppose that two people at Site 1 and Site 2 are working in a collaborative setting with a text editor that currently contains the word “Car”. The user at Site 1 inserts the character ‘s’ as the first character at the same time the user at site 2 deletes the letter ‘r’. In operational transform this would yield the following two transformations sent to the server copy.
1. Insert[0,'s'] – insert the character 's' at position 0
2. Delete[2] – delete the character at position 2 (the letter 'r')

Suppose the server receives the first operation and then the second. Inserting the 's' at position 0 and then deleting the character at position 2 yields the string "sCr". This is obviously wrong, because the second user wished to delete the letter 'r'. The idea of operational transform is that each operation is transformed with respect to the operations before it so that convergence can be achieved. In other words, after the server applies the insertion of 's' at position 0, it shifts the position of each concurrent operation accordingly: Delete[2] becomes Delete[3], and applying it yields the intended string "sCa".
Getting the server to correctly transform the operations is however more complicated than incrementing indices. In order to maintain consistency among the documents at all the sites collaborating on a single document there are a number of different models with underlying properties that must be followed.
An example of the base consistency model is the causality convergence model.
Causality Convergence Model
- Causality Property: Makes sure all edits which are causally dependent produce the same effects as was intended in the collaboration process [4, 475].
- Convergence Property: Ensures all copies of a single collaboration document are equal (Operations have been applied to all sites) [4, 475]
To implement operational transform it is important to know that it requires a system of components. The two main parts of an operational transformation system are the transformation control algorithm and the basic transformation operations.
1. Transformation Control Algorithm
a. Generally determines the order of transformations
2. Transformation Operations
a. Insert, Delete, Enter
There are generally two different types of operational transformation control functions.
1. Inclusion Transformation: IT(O1, O2) or \( T(O_1, O_2) \), transforms O1 to include the effect of O2. [15]
2. Exclusion Transformation: ET(O1, O2) or \( T^{-1}(O_1, O_2) \), transforms O1 to exclude the effect of O2. [15]
This means each operation received by the server is processed in pairs, in sorted order according to when the operations were received. Concurrent operations must be analyzed and possibly transformed with the inclusion function. To determine whether two operations need to be transformed, one must decide if they conflict. According to Du Li and Rui Li in their paper on operational transformation [4], two concurrent operations are conflicting if they possess one of the following characteristics:
(1) Both are deletion operations operating on the same character.
(2) One of the operations is deleting the character and the other is updating it.
(3) Both operations update the same attribute of the same character [4, 477].
There are various implementations of the inclusion transformation and it generally varies based on the type of file you wish to collaboratively edit.
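To make this concrete, the "Car" example from above can be sketched in code. The following Python fragment is illustrative only (the project itself is written in C#, and the `Op` and `include` names are my own, not taken from any OT library); it implements a minimal inclusion transformation for single-character insert and delete operations:

```python
# Minimal inclusion transformation IT(o1, o2) for character operations.
# Transforms o1 so it can be applied after o2 has already executed.
from dataclasses import dataclass

@dataclass
class Op:
    kind: str     # "ins" or "del"
    pos: int      # character offset
    ch: str = ""  # character to insert (for "ins")

def include(o1: Op, o2: Op) -> Op:
    """Return o1 transformed with respect to the effect of o2."""
    pos = o1.pos
    if o2.kind == "ins" and o2.pos <= o1.pos:
        pos += 1   # an earlier insert shifts o1 one position right
    elif o2.kind == "del" and o2.pos < o1.pos:
        pos -= 1   # an earlier delete shifts o1 one position left
    return Op(o1.kind, pos, o1.ch)

def apply_op(text: str, op: Op) -> str:
    if op.kind == "ins":
        return text[:op.pos] + op.ch + text[op.pos:]
    return text[:op.pos] + text[op.pos + 1:]

# Site 1 inserts 's' at position 0; Site 2 deletes 'r' at position 2.
text = "Car"
o1 = Op("ins", 0, "s")
o2 = Op("del", 2)
text = apply_op(text, o1)               # "sCar"
text = apply_op(text, include(o2, o1))  # Delete[2] becomes Delete[3]
print(text)  # sCa
```

Applying the untransformed Delete[2] would have produced the wrong string "sCr"; the inclusion step shifts the delete to position 3 so both users' intentions are preserved.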
Operational transformation is by far the most popular method for solving the real time collaborative editing problem. The following is a list of software that currently uses it.
Collaborative plain text editors
- Subethaedit (commercial)
- Ace (free, open-source)
- Gobby (free, open-source)
- MoonEdit (free for non-commercial use)
Web-based applications
- Google Docs & Google Wave.
- EtherPad (acquired by Google)
3.1.2 Differential Synchronization
Another interesting method of implementing real time collaborative editing is differential synchronization. Differential synchronization is a symmetrical algorithm created by Neil Fraser at Google as part of the Google MobWrite project.
Neil Fraser gives the visualization in Figure 4, along with the following steps, to explain exactly how differential synchronization works.
1. Client Text is diff’d (get difference) against the Common Shadow.
2. This returns a list of edits which have been performed on Client Text.
3. Client Text is copied over to Common Shadow. This copy must be identical to the value of Client Text in step 1, so in a multi-threaded environment a snapshot of the text should have been taken.
4. The edits are applied to Server Text on a best-effort basis.
5. Server Text is updated with the result of the patch. Steps 4 and 5 must be atomic, but they do not have to be blocking; they may be repeated until Server Text stays still long enough.[5]
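The steps above can be sketched in a toy, single-threaded form. This Python fragment is illustrative only: it uses difflib for the diff step and applies edits by raw offset, whereas a real implementation performs the best-effort patch fuzzily so it still works when the server text has drifted further:

```python
# A toy, single-threaded round of differential synchronization.
import difflib

def diff(shadow: str, client: str):
    """Steps 1-2: diff the client text against the common shadow."""
    sm = difflib.SequenceMatcher(a=shadow, b=client)
    return [op for op in sm.get_opcodes() if op[0] != "equal"]

def patch(server: str, shadow: str, client: str) -> str:
    """Steps 4-5: apply the client's edits to the server text."""
    result = server
    # Apply edits right-to-left so earlier offsets stay valid.
    for tag, i1, i2, j1, j2 in reversed(diff(shadow, client)):
        result = result[:i1] + client[j1:j2] + result[i2:]
    return result

shadow = "The quick brown fox"
client = "The quick red fox"      # client changed "brown" to "red"
server = "The quick brown foxes"  # server meanwhile appended "es"
server = patch(server, shadow, client)
shadow = client                   # step 3: copy client text over the shadow
print(server)  # The quick red foxes
```

Note that both users' changes survive the round: the client's word replacement and the server's suffix are merged without either side losing work, which is the whole point of the algorithm.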
Differential synchronization is a very interesting alternative to OT. The algorithm does not have as much conceptual theory behind it as operational transformation, as it is a fairly new method composed of a few already-established parts. Neil Fraser, the protocol's designer, gives a wonderful Google Tech Talks lecture in the following video.
http://www.youtube.com/watch?v=S2Hp_1jgpY8
3.1.3 Cursor Preservation
As multiple people type text into the same text box, cursor behavior becomes important to control. This may not be obvious at first, but suppose you are typing at cursor position 20 in an editor and someone inserts five characters starting at position zero. The position you were previously editing will have moved up to position 25, but your cursor would have stayed at 20 and been left behind. This must be fixed before a collaborative editor is usable. There are two common ways to implement cursor preservation.
3.1.3.1 Absolute Referencing
Absolute referencing, based on storing character and cursor offsets, is the most popular technique for cursor preservation [3]. The start and end characters of each user's cursor are stored. If an insertion is received that starts at or before one of these points, the offsets are incremented by the length of the edit. If a deletion is received that starts before or at one of these points, the offsets are decremented by the length of the deletion. Any deletions and insertions received after these points have no effect on the cursor. The following is a simple example, whose form was taken from [3], showcasing absolute referencing.
The cursor is currently at offset 24, just before the word "slithy":

```
'Twas brillig, and the ^slithy toves
```

The following edits arrive from a remote user: the word "and" is deleted and the character "&" is inserted in its place. Three characters were deleted and one was added. If no deltas were made to the cursor offset, the cursor would shift by two characters:

```
'Twas brillig, & the sl^ithy toves
```

By subtracting three and adding one to the cursor location, the cursor is moved to the expected location:

```
'Twas brillig, & the ^slithy toves
```
[3]
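The offset bookkeeping just described can be captured in a few lines. This Python sketch is illustrative only (the function name and the edit position used in the example are my own), applied to the scenario above where a remote user deletes three characters and inserts one before a cursor at offset 24:

```python
# Cursor preservation by absolute referencing: shift a stored cursor
# offset whenever a remote insertion or deletion lands at or before it.

def adjust_cursor(cursor: int, edit_pos: int, length: int, is_insert: bool) -> int:
    """Return the new cursor offset after a remote edit of `length` chars."""
    if edit_pos <= cursor:
        if is_insert:
            return cursor + length             # remote insert pushes the cursor right
        return max(edit_pos, cursor - length)  # remote delete pulls it left
    return cursor  # edits after the cursor have no effect

# Cursor at 24; a 3-character deletion and a 1-character insertion arrive
# at an earlier position (18 here, for illustration): net shift -3 + 1 = -2.
cursor = 24
cursor = adjust_cursor(cursor, 18, 3, is_insert=False)  # deletion
cursor = adjust_cursor(cursor, 18, 1, is_insert=True)   # insertion
print(cursor)  # 22
```

The `max(edit_pos, ...)` guard handles the case where the deletion swallows the cursor itself, clamping the cursor to the start of the deleted region rather than letting it go negative.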
3.1.3.2 Context Matching
Re-positioning a cursor to its previous location after a remote edit by finding its previous context is known as context matching. In context matching, a variable number of characters before and after the cursor's start and end locations are remembered, along with the locations of the start and end points themselves. When a remote edit is received, the text is patched and a fuzzy match algorithm is executed in order to find the location in the text that most closely matches the previously stored context. The algorithm by default takes into account both the difference in context and the offset between the old cursor and possible new locations, using the latter as a tie-breaker for equally probable context matches. The following example, whose form was taken from [3], accurately portrays context matching.
Consider the following text, where ^ denotes the cursor position:

```
'Twas brillig, and the slithy toves
Did gyre and ^gimble in the wabe:    CONTEXT = S:yre and E:gimble i
All mimsy were the borogoves,
And the mome raths outgrabe.
```

And the following edit is received:

```
'Twas brillig, and the slithy toves
All mimsy were the borogoves,
Did yre ^Gimble in the Wabe:
And the mome raths outgrabe.
```
[3]
Following the edit the algorithm will begin searching the new text for the context at around position 52. Eventually it will find the position of the new cursor at position 80 with a Levenshtein distance of 4 compared to the old context.
Context matching performs fairly well although there can be problems when there are duplicate lines. Google Docs uses context matching as a part of its real time collaborative editing.
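A simplified version of this idea can be sketched as follows. This Python fragment is illustrative only: it remembers just the characters *before* the cursor and scores candidate windows with difflib, breaking ties by closeness to the old offset, whereas the full algorithm described above also uses the context after the cursor and Levenshtein-weighted fuzzy matching:

```python
# Simplified context matching: remember the text just before the cursor,
# then after a remote edit search the patched text for the window that
# best matches that context, preferring positions near the old offset.
import difflib

def find_cursor(new_text: str, before_ctx: str, old_pos: int) -> int:
    """Return the new cursor offset: the end of the best match of before_ctx."""
    n = len(before_ctx)
    best_pos, best_key = old_pos, (-1.0, float("-inf"))
    for i in range(len(new_text) - n + 1):
        window = new_text[i:i + n]
        score = difflib.SequenceMatcher(a=before_ctx, b=window).ratio()
        key = (score, -abs((i + n) - old_pos))  # similarity first, distance second
        if key > best_key:
            best_key, best_pos = key, i + n
    return best_pos

old_text = "alpha\nbeta gamma"    # cursor at offset 11, just before "gamma"
before_ctx = old_text[6:11]       # the remembered context: "beta "
new_text = "intro\n" + old_text   # a remote user inserts a line up front
cursor = find_cursor(new_text, before_ctx, old_pos=11)
print(new_text[cursor:])  # gamma
```

Even though the insertion shifted every offset, the cursor is re-anchored to its context and lands back in front of "gamma". With duplicate lines the similarity scores tie, and the distance term decides, which is exactly the tie-breaking behavior described above.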
4. Design and Implementation
4.1 GUI Decisions
4.1.1. GUI Framework
One very large consideration I had when starting my project was which GUI framework to use. This is an important choice for a number of reasons. First, the choice of GUI framework largely dictates the coding style: how complex will the code be, and do new languages need to be learned to adopt the framework? Secondly, each GUI framework has different strengths and weaknesses. Lastly, there are always factors to consider such as compatibility with various operating systems. I compiled the following possible choices for the GUI.
<table>
<thead>
<tr>
<th>Name</th>
<th>Windows</th>
<th>OSX</th>
<th>LINUX</th>
<th>Familiar Language</th>
</tr>
</thead>
<tbody>
<tr>
<td>Silverlight</td>
<td>Yes</td>
<td>Yes with Moonlight</td>
<td>Yes with Moonlight</td>
<td>1/2</td>
</tr>
<tr>
<td>WPF</td>
<td>Yes</td>
<td>Not at this time</td>
<td>Not at this time</td>
<td>1/2</td>
</tr>
<tr>
<td>GTK+</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Java Swing</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>WinForms</td>
<td>Yes</td>
<td>Yes with Mono</td>
<td>Yes with Mono</td>
<td>Yes</td>
</tr>
</tbody>
</table>
However, this table alone is inadequate for making a decision. I eliminated WinForms and Java Swing right away, as their age renders them basically obsolete. GTK+ is the standard Linux GUI toolkit, and its compatibility across all popular operating systems makes it worthy of attention. However, I found its available tools, controls, and complexity rather dated, and I was disappointed to learn that its most recent version, GTK+ 3.0, was only available with C++, Python, and various other language wrappers. At the end of the day I was looking at two main choices for my environment.
a. Silverlight
Microsoft Silverlight is a framework aimed at creating applications with heavy internet integration. Like WPF, Silverlight uses the Extensible Application Markup Language (XAML) to describe the frames and windows of the GUI. Although Silverlight applications can run outside a browser, the vast majority run inside one. Silverlight is the obvious choice when building an internet application, and its cross-compatibility with Linux and OSX through the ported version, Moonlight, is also a big plus.
b. WPF
Windows Presentation Foundation is a graphical subsystem for rendering user interfaces in Windows. WPF, like Silverlight, uses the XAML language to declare user interface objects and dynamically link these objects with items in the code; each WPF window has an XAML file and a code-behind file. WPF's most native language is C#, and using C# and XAML it is simple to create very dynamic, nice-looking, animated GUIs. The main downfall of WPF, although this may change, is that it is only compatible with Windows at this time.
I ended up choosing WPF as the framework I would work in. This was mainly because I felt more comfortable using it, and because I felt that Silverlight, although attractive for its cross-compatibility, has a browser-driven platform that might not be suitable for this project.
4.1.2. Docking Panels
The docking system of an IDE is very important, as it gives users the freedom to decide what their user interface looks like and how they interact with it. To handle docking, I once again looked to an open-source control called AvalonDock. It behaves almost identically to Visual Studio 2010, which was in fact coded in WPF. Panels can be docked to the bottom, right, left, and top, or outside of the program itself as a stand-alone window.
**Figure 5 – Docking Capabilities**
4.1.3. Code Editor
The text editor of an IDE is one of its most important parts, and a capable programming text editor is quite complicated. It must handle syntax highlighting, line numbering, and in many cases code completion. In this project the text editor component has added complexity due to the aspiration for text editing to be collaborative between many people in real time. This makes the text editor the essential component of this project.
The process of creating a line-numbered, syntax-highlighting text editor can be time consuming. Instead of trying to mimic current implementations, I decided to use a popular open-source "code style" text editor called AvalonEdit. AvalonEdit supports syntax highlighting in multiple languages such as XML, Java, C++, and C#, supports line numbering, and has a built-in mechanism for implementing code completion. I felt this text editor was perfectly suited to my needs. I did, however, change much of the source code in order to make it work with the networking aspect of the program.
4.2 Persistence
To keep track of individual projects, users, tasks, and other user data my project takes advantage of many popular methods for persistence.
4.2.1. SQL Database
When a user first opens the Link IDE, he or she will see the project data screen. Here, users can either create a project or resume an existing one. To resume an existing project, they double-click the project name as it appears in the left pane. The user is then prompted to enter the project password; after entering the correct password, they reach a user login screen for that project, where they must log in as a specific user. Each project that is created is stored in a web-hosted SQL database in a table named Projects, where the password is also stored. Each project's information is loaded from SQL when the program starts up. The usernames for a project are stored in a SQL table named Usernames with a foreign key pointing to the project they belong to; they are loaded once the user enters the correct password for the project. Using a SQL server for this information is an efficient method of storing project and user information.
Users can also create tasks for other users or themselves. These tasks designate work that needs to be done. In order to have constant access to the list of tasks each task and its information is stored in a SQL table ProjectTasks with a foreign key pointing to the project they belong to.
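The schema just described can be sketched as follows. This uses an in-memory SQLite database purely for illustration (the actual project uses a web-hosted SQL server, and the exact column names here are my own assumptions based on the fields described in the text):

```python
# Illustrative sketch of the persistence schema: Projects, Usernames, and
# ProjectTasks, with foreign keys from the latter two back to Projects.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Projects (
    ProjectId   INTEGER PRIMARY KEY,
    Name        TEXT NOT NULL UNIQUE,
    Password    TEXT NOT NULL
);
CREATE TABLE Usernames (
    UserId      INTEGER PRIMARY KEY,
    ProjectId   INTEGER NOT NULL REFERENCES Projects(ProjectId),
    Name        TEXT NOT NULL
);
CREATE TABLE ProjectTasks (
    TaskId      INTEGER PRIMARY KEY,
    ProjectId   INTEGER NOT NULL REFERENCES Projects(ProjectId),
    Name        TEXT NOT NULL,
    Owner       TEXT,
    ClassFile   TEXT,
    Priority    INTEGER,
    DueDate     TEXT,
    Description TEXT
);
""")

# Creating a project, a user in that project, and a task for that user:
conn.execute("INSERT INTO Projects VALUES (1, 'LinkDemo', 'secret')")
conn.execute("INSERT INTO Usernames VALUES (1, 1, 'kevin')")
conn.execute("INSERT INTO ProjectTasks VALUES (1, 1, 'Fix parser', 'kevin', "
             "'Parser.java', 1, '2012-05-01', 'Handle empty input')")
tasks = conn.execute(
    "SELECT Name FROM ProjectTasks WHERE ProjectId = 1").fetchall()
print(tasks)  # [('Fix parser',)]
```

The foreign keys on Usernames and ProjectTasks mirror the description above: usernames and tasks always belong to exactly one project, so loading a project's data is a simple filtered query.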
4.2.2. FTP Server
The storage of Link project files on the web is handled by a dedicated FTP server. When a user selects an existing project and enters the correct authentication, the files are downloaded from the folder on the FTP server with the name of the project, and a folder for this project is created in the selected Link project workspace. On the creation of a new project, a folder is created on the FTP server for that project name and a folder is created in the user's workspace. The class that handles the FTP operations runs the FTP requests in a background thread so as not to disturb the UI.
The presence of the FTP server is very useful because it allows users to have their code backed up without having to worry about it themselves. It also allows them to pick up where they left off no matter what computer they happen to be working with.
4.2.3. Storing local data
Local data, such as where the workspace for projects is stored, is written to an isolated storage file. Isolated storage files are kept on the computer's hard drive and persist between sessions. This is useful because the user only has to select a new workspace when using the program on a new computer.
4.3 Compilation and Building of Code
As of right now, Link IDE supports only one language, Java, and compiles and runs code via the user's Java compiler. When I started the project I realized that I should focus on one language, and Java happened to be the one I knew most about. A typical IDE does not usually come packaged with a compiler; it expects users to have the required compilers installed on their systems in order to develop inside it. Not including a compiler reduces complexity and installation size, keeping the program quick and responsive.
With that being said, the compilation and running procedures are actually very simple. As files are added to the project, the program tries to recognize which files contain code and adds them to a list of files to be compiled. The user must then mark the main class by right-clicking and selecting "Set As Main" in the context menu that pops up. Upon pressing compile, the code is compiled and the output is sent to the output pane at the bottom. Any errors will be found in the error tab in the same pane as the output (Figure 10).
In order to compile code, the Java compiler executable is started with the source files of the current project as arguments. Similarly, to run the code, the Java executable is run with the main class as the argument. Both of these actions are done via Processes in C# (see Code Example 1).

Code Example 1 – Starting java and javac via Processes in C#

Errors received at run time are simply printed to the output window. Errors at compile time, however, are more complex to deal with: the compiler's error output stream must be read and parsed to separate the errors and store their specific information. After the errors have been parsed, they are stored in a collection that is shown in the error list view, which has details such as File, Line Number, and Error Description (Figure 10).
<table>
<thead>
<tr>
<th>File</th>
<th>Line Number</th>
<th>Error Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>IfThenStatement.java</td>
<td>15</td>
<td>error ';' expected</td>
</tr>
<tr>
<td>LetStatement.java</td>
<td>49</td>
<td>error ';' expected</td>
</tr>
<tr>
<td>MultiStatement.java</td>
<td>21</td>
<td>error illegal start of expression</td>
</tr>
<tr>
<td>MultiStatement.java</td>
<td>21</td>
<td>error ';' expected</td>
</tr>
<tr>
<td>MultiStatement.java</td>
<td>23</td>
<td>error reached end of file while parsing</td>
</tr>
<tr>
<td>ReturnStatement.java</td>
<td>10</td>
<td>error ';' expected</td>
</tr>
</tbody>
</table>
Double-clicking an error will automatically open the file containing the error, or switch focus to it if it is already open.
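The parsing step described above could look something like the following hedged sketch. The regex and type names are assumptions, not the project's actual code; it assumes javac's usual `File.java:21: ...` error format seen in Figure 10.

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Hypothetical sketch: parse the compiler's stderr into the rows shown in
// the error list (File, Line Number, Error Description).
public sealed class CompileError
{
    public string File;
    public int Line;
    public string Description;
}

public static class ErrorParser
{
    // Matches lines such as: "MultiStatement.java:21: error ';' expected"
    private static readonly Regex Pattern = new Regex(
        @"^(?<file>[^:\r\n]+\.java):(?<line>\d+):\s*(?<desc>.+)$",
        RegexOptions.Multiline);

    public static List<CompileError> Parse(string compilerStderr)
    {
        var errors = new List<CompileError>();
        foreach (Match m in Pattern.Matches(compilerStderr))
        {
            errors.Add(new CompileError
            {
                File = m.Groups["file"].Value,
                Line = int.Parse(m.Groups["line"].Value),
                Description = m.Groups["desc"].Value.Trim()
            });
        }
        return errors;
    }
}
```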
4.4 Collaborative Implementations
4.4.1 Project/User Authentication
Requiring project authentication based on a project password and usernames offers obvious security benefits, but it also identifies each user, allowing the system to analyze activity and users to assign work to one another. Because one is logged in when one makes changes, the system could potentially track user activity and efficiency. This might include graphs or bar charts showing the number of edits each user has made, which would be useful for telling who might not be doing enough work or who is doing too much. Most likely a feature like this would influence people to work more to stay in balance with the others. More importantly, giving each user a personal name allows users to communicate more effectively, whether by chatting or by delegating tasks.
4.4.2. Task List
The inclusion of a task list is a necessity to organize the otherwise chaotic synchronous editing of the same document. Upon logging into a project, users can start delegating tasks to other users by creating them in the create task menu. A task has a Name, Owner, Class File where the task is to be implemented, Priority, Due Date, and a Description. After a task is created, a signal is sent to the server and each user is triggered to refresh their task list. Tasks for the current user will then appear in the left pane in the tab titled TaskList (Figure 11).

Figure 11 – Create Task
Tasks for the entire project are also visible via a task overview list, which shows each task's information. This list operates in two different views, a list view and a tile view (Figure 12). The tasks are color coded depending on their progress: finished tasks are green, in-progress tasks are yellow, and not-started tasks are red.
Double clicking a task will open a window where that task can be modified and updated (Figure 13). Updates are simultaneously received by all users at the point of update. Left clicking a task will bring up a context menu where the task can be deleted (completed task and by owner only) and where the update menu can be found again (Figure 13).
4.4.3. Chatting
Chatting through a global chat room and instant messaging is an important communication tool that increases collaboration. In Link there are two window panes docked at the bottom: one is for entering a chat message and the other shows all the chat messages of the users (Figure 14). Docked to the left side of the window, along with the task list and project explorer tree, is a Messenger tab. This tab lists the users currently connected to the project. Double clicking a username opens an instant message window where one can hold a private conversation with only that user. This feature allows users to coordinate work on tasks they may share without typing directly into the real-time editor.
Figure 14 – Chat Functionality
Chatting with users operates through the same server as the real-time collaborative editing. The server used for this was based on Windows Communication Foundation (WCF), a service-oriented API for .NET that has excellent compatibility with WPF applications. The chat service implements a few different methods relating to chat functionality.
```csharp
public void Say(Message msg)
{
lock (syncObj)
{
foreach (IChatCallback callback in clients.Values)
{
callback.Receive(msg);
}
}
}
public void Whisper(Message msg, Client receiver)
{
foreach (Client rec in clients.Keys)
{
if (rec.Name == receiver.Name)
{
IChatCallback callback = clients[rec];
callback.ReceiveWhisper(msg, rec);
}
}
}
```
**Code Example 2 – Say and Whisper message server side.**
These two methods implemented by the chat server are called by a user who wants to send a message to the chat box. When a say message is sent to the server, it is broadcast to the entire list of connected users. When a whisper message is sent to the server, it is sent only to the recipient. For the server to broadcast these messages, it needs each callback object to invoke its Receive and ReceiveWhisper methods. The callback object is each client that has subscribed to the service and implements the service's interface. For chat functionality, this interface requires that the client implement the following methods (Code Example 3) [17].
```csharp
[OperationContract(IsOneWay = true)]
void RefreshClients(List<Client> clients);
[OperationContract(IsOneWay = true)]
void Receive(Message msg);
[OperationContract(IsOneWay = true)]
void ReceiveTaskChangeSignal();
[OperationContract(IsOneWay = true)]
void ReceiveWhisper(Message msg, Client receiver);
```
**Code Example 3 – Call back methods on server side to be implemented by client**
On the client side these implementations are fairly trivial and involve simply adding the received messages to UI components such as list boxes.
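As a hedged sketch of what such a client-side implementation might look like — the UI element names (`chatListBox`, `messengerList`), the `Message` fields, and the helper methods are assumptions for illustration, not the project's actual code. WCF invokes callbacks on a background thread, so UI updates are marshalled through the WPF Dispatcher:

```csharp
using System.Collections.Generic;

// Hypothetical client-side implementation of the IChatCallback interface
// from Code Example 3: each method forwards received data to a UI component.
public partial class MainWindow : IChatCallback
{
    public void Receive(Message msg)
    {
        // Assumed Message fields (Sender, Content) for illustration.
        Dispatcher.Invoke(() =>
            chatListBox.Items.Add(msg.Sender + ": " + msg.Content));
    }

    public void ReceiveWhisper(Message msg, Client receiver)
    {
        Dispatcher.Invoke(() => ShowWhisperWindow(msg, receiver)); // hypothetical helper
    }

    public void RefreshClients(List<Client> clients)
    {
        Dispatcher.Invoke(() => messengerList.ItemsSource = clients);
    }

    public void ReceiveTaskChangeSignal()
    {
        Dispatcher.Invoke(RefreshTaskList); // hypothetical helper that reloads the task list
    }
}
```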
4.4.4. Real Time Collaborative Editing
In order to make the code editor inside the Link IDE a real-time collaborative editor, I roughly followed and implemented the Google Mobwrite protocol. This required having the three main functionalities of the protocol present: a version of differential synchronization, a difference and patching algorithm, and a cursor preservation method.
4.4.4.1. Difference and Patching algorithm
Neil Fraser, the designer of the Mobwrite protocol, has open sourced much of his work, including the difference and patching algorithms for Mobwrite. I used his C# implementations as part of the WCF server that controls most of the real-time collaborative functions in my project. Inside the service there are two methods needed to form a patch and apply it (Code Example 4). First a diff_match_patch object is created. This object can then call the function patch_make(String old, String new) in order to find the difference between the two strings and create a patch. The patch can then be applied with the function patch_apply, which returns the patched text (strings in C# are immutable, so the result is returned rather than modified in place). The following example shows how this works.
```csharp
// Build a patch from the old vs. new text, then apply it to the old text.
diff_match_patch dmp = new diff_match_patch();
String toBePatched = "This is to be patched";
Object[] results = dmp.patch_apply(dmp.patch_make(toBePatched, "This was to be patched"), toBePatched);
Console.WriteLine((String)results[0]); // => "This was to be patched"
```
**Code Example 4 – Creating a diff and patching it into text**
Using this, I created a function diffAndPatch(String newText, String fileName) inside my program which returns an Object array containing the patched text and the edit length (Code Example 5).
```csharp
private Object[] diffAndPatch(String newText, String fileName)
{
String oldText = textFile[fileName];
Object[] results = dmp.patch_apply(dmp.patch_make(oldText, newText), oldText);
textFile[fileName] = (String)results[0];
int editLength = ((String)results[0]).Length - oldText.Length;
results[1] = editLength;
return results;
}
```
**Code Example 5 – Server side function for finding difference and patching it**
The edit length is especially important for cursor preservation.
4.4.4.2. Differential Synchronization
The implementation of differential synchronization in my project varies in a few ways from how it is implemented in the Google Mobwrite protocol. Neil Fraser designed the algorithm with dropped packets on the internet in mind, and thus it does some extra work to ensure consistency among all parties' text. For use in my project I simplified the protocol.
The differential synchronization process runs on the same WCF server as the project's chat functionality and new-task signaling. The first step in the implementation was to put a listener on the code editor of the program. Opening a file automatically establishes that file on the server, meaning the server will now track its contents and allow multiple users to edit them. Upon entering any text in the editor, the following method is called (Code Example 6).
```csharp
private void editorTextInput(object sender, EventArgs e) {
if (this.localClient != null) {
SendTextChanges(Editor.Text.ToString(), Editor.Tag.ToString(), this.localClient, Editor.CaretOffset);
}
}
```
**Code Example 6 – Listener placed on text editor to send text changes to server**
This method sends the text changes, along with the file's name and the cursor position of the edit, to the server. Upon receiving the message, the following method is executed on the server side (Code Example 7).
```csharp
public void SendText(String text, String fileName, Client client, int carretPositionOfEdit)
{
Object[] newTextAndLength = diffAndPatch(text, fileName);
foreach (Client sender in clients.Keys)
{
if (sender.Name != client.Name)
{
IChatCallback callback = clients[sender];
callback.ReceiveText((String)newTextAndLength[0], fileName, (int)newTextAndLength[1], carretPositionOfEdit - (int)newTextAndLength[1], carretPositionOfEdit, false);
}
}
}
```
**Code Example 7 – Server side response to text changing**
This method first forms the diff between the new text and the old text stored on the server, and then applies the patch (Code Example 5). It then broadcasts the changes to each client except the sender by sending the callback message ReceiveText, which is executed on the client side.
```csharp
public void ReceiveText(String text, String fileName, int editLength, int carretPositionOfEditStart, int carretPositionOfEditEnd, bool tag)
{
    int newCarretOffset = 0;
    bool changeCarret = false;
    if (openedDocuments.ContainsKey(fileName))
    {
        if (openedDocuments[fileName].Equals(editor.SelectedItem) && !tag)
        {
            int differenceBetweenCarretPositions = carretPositionOfEditStart - openedDocuments[fileName].Content.CaretOffset;
            if (differenceBetweenCarretPositions < 0)
            {
                newCarretOffset = openedDocuments[fileName].Content.CaretOffset + editLength;
                changeCarret = true;
            }
        }
        // Remove the text editor listener before setting the text, then re-apply the listener
        openedDocuments[fileName].Content.TextChanged -= editorTextInput;
        openedDocuments[fileName].Content.Text = text;
        openedDocuments[fileName].Content.TextChanged += editorTextInput;
        // If the caret should be moved then move it
        if (changeCarret)
            openedDocuments[fileName].Content.CaretOffset = newCarretOffset;
    }
}
```
**Code Example 8 – Client method for receiving text and adjusting cursor**
Receiving the text and changing the editor to represent the new text changes is the easy part in this method. The complicated handling in this method comes from trying to preserve the cursor.
4.4.4.3. Cursor Preservation
Cursor preservation, as explained earlier, is the need to move the cursor algorithmically every time an edit is made to the text in a collaborative editor. Without handling this behavior correctly, cursor locations jump around and the editor becomes basically unusable. For the purposes of my project I implemented an absolute-reference cursor preservation method similar to the method used in the retired Google Wave.
In Code Example 6 you can see that when a text change signal is sent to the server, the cursor position of the edit is sent as well. The server then determines the length of the edit and relays this information, along with the cursor position, to each receiver. Knowing the position and length of the edit is important because we want to determine whether the edit's full encompassing length and position intersect with any other cursors in the program. If they do intersect, those cursors are moved by the length of the edit. This is an efficient and clean method because it works for deletions just the same as for additions.
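The rule described above can be distilled into a single function. This is a hedged sketch of the same logic as Code Example 8; the `Math.Max` clamp (so a large deletion cannot move the caret before the edit point) is my addition, not in the original code.

```csharp
using System;

// A remote edit of editLength characters starting at editStart shifts every
// local caret that sits past the edit position. editLength is negative for
// deletions, so one formula covers additions and deletions alike.
public static class CursorPreservation
{
    public static int PreserveCaret(int localCaret, int editStart, int editLength)
    {
        if (editStart < localCaret)
            return Math.Max(editStart, localCaret + editLength);
        return localCaret; // the edit happened after the caret: nothing moves
    }
}
```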
5. Usability Testing
As a further step to validate my work on the Link IDE, I found it imperative to carry out a usability test with the input of a few real-life users. I called on the help of three users to test the program. User 1 is a graduate student in biomedical engineering. User 2 is a middle-aged high school computer science teacher. User 3 is a graduate student in computer science. I tested each user separately three different times, using three different tests where I was present each time so that we could work together in a pair programming model. After these tests I scheduled a session with all three users and myself present and created a sample project. This final step was recorded on video and is available via a link referenced in this paper [16].
5.1. Test one – Project Creation, Resume project, Build Project
Test one involved asking the user to first create a project and add a few files. Next the user would exit the program and then resume the project. Finally, the user would be asked to write some sort of simple program (e.g., Hello World) and then build and run it.
5.1.1. User 1
User 1 opened the program and initially did not know how to begin. Before using the "Create New Project" page he tried to follow a "File -> New Project" route but found that it did not exist. After a short while he noticed the new project page and began filling out the necessary information. He had no trouble creating the project and creating his username, and he found adding new or existing files easy. There was no problem resuming the project or writing code, but building the project was at first confusing: User 1 needed help to realize you have to set the main class of your program before running it.
5.1.2. User 2
User 2 had no problems creating a project. Upon resuming, he realized he must have mistyped his password and could not get back in. After making a new user and exiting the program, he was able to log in as the new user. Writing the program was very swift, but he also ran into problems when attempting to run the program, as he had not set the main class yet.
5.1.3. User 3
User 3 had few problems with the entire test. Because he spent time right-clicking the project files, he saw the context menu option to set a file as main and thus figured out that you must do that. He did, however, comment that he could see how that could be overlooked.
5.1.4. Analysis of Test 1 results
The main problem users encountered in the first test was that they did not realize that you had to set the main class that will be run. This is an obvious shortcoming of my project and it was something that I did not get to in the time I had. Because the program was still functional without it I left it as is. In a future version this will be fixed. User 2’s interesting dilemma with his password being lost made me think I might want to add some sort of password recovery service.
5.2. Test two – Synchronous editing
In this test I asked the users to start a server and have me connect. Then they would disconnect and connect to a server I would start. Then we would both open the same document. The user would add a print statement with the variable “x” and I would initialize the variable and then we would run the program.
5.2.1. User 1
User 1 had problems figuring out his own IP when attempting to host the collaborative server and had to be helped. Connecting to my hosted server went without problem. The editing worked nicely.
5.2.2. User 2
User 2 also had some issues at first when hosting his own server but quickly understood. Connecting to my server was no problem. The editing was interesting at first because we were both editing in the same area, but we quickly adapted.
5.2.3. User 3
User 3 had no problem hosting or connecting to the server and found the synchronous editing to be responsive and impressive.
5.2.4. Analysis of test
This test was short and was mostly just to familiarize the user with the basic synchronous testing. Based on the confusion with IPs, it might be useful to add an automatic local IP recognizer.
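A hedged sketch of what such an automatic local IP recognizer might look like — the class name is hypothetical, and it simply returns the host's first IPv4 address so a user hosting a server would not have to look it up manually:

```csharp
using System.Net;
using System.Net.Sockets;

// Hypothetical "automatic local IP recognizer" suggested above.
public static class LocalIp
{
    public static string FirstIPv4()
    {
        foreach (IPAddress addr in Dns.GetHostEntry(Dns.GetHostName()).AddressList)
        {
            if (addr.AddressFamily == AddressFamily.InterNetwork)
                return addr.ToString();
        }
        return "127.0.0.1"; // fall back to loopback if no IPv4 address exists
    }
}
```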
5.3. Test three – Communication, Task Lists, Overview
The third experiment involved having users test the chat functionality as well as the task list. The test was also something of an overview test, because it incorporated most of the functions of the application. Users were asked to host a server for me to connect to, complete the task I assigned them, and assign me a task to complete. During this test I was not in the room with the user, so the chat functionality had to be used.
5.3.1. User 1
User 1 did not have a problem resuming the project from earlier and remembered how to host a server this time. User 1 had a bit of an issue seeing which tasks were assigned to him, but after receiving instructions via the application's chat window it was not a problem. Assigning a task was not an issue.
5.3.2. User 2
User 2 had no problems resuming the project, found the task screen right away, and assigned me a task. I assigned him a task and he was able to find it by exploring the application's window panes.
5.3.3. User 3
User 3 did not encounter any issues resuming his project. He was able to add tasks easily and quickly realized where the tasks showed up as they were added. User 3 had few problems with test 3.
5.3.4. Analysis of test
Test 3 was more of an introduction for the users to the task list and chat functionalities than a usability test. It was important to lay the groundwork and establish the understanding for the final step of usability testing.
5.4. Final Test
For the final step of usability testing I scheduled a time where all 3 users could be online at the same time so that a project could be worked on collaboratively together. The program assignment would be a simplified GUI calculator. The GUI would be composed up of 3 text fields and 4 buttons. There would be two text fields for the left and right numbers and one text field for the result. Each of the 4 buttons would compute the result of the addition, subtraction, multiplication, and division of the two numbers and display them in the result box.
After letting the users deliberate and decide how they were going to program, I told them to record their work on the project. The video turned out to be an excellent way to see how efficiently work could be done using this tool. This video is available at the link referenced in [16].
6. Future Work / Implementations
This section is dedicated to ideas I had while working on the project that I did not have enough time to implement. If I ever make a real push to get Link IDE ready for a release, some of these will be a necessity.
6.1. Synchronous Locking
It may be useful to include some sort of mechanism for locking a specific document in the project. This would allow team members who do not wish to be disturbed by other users to work alone on a certain class file.
6.2. Concurrent Performance
It would be useful to parallelize the code in the program in order to gain performance. Network operations, such as downloading project files from FTP, could also be parallelized in order to reduce latency. Having separate servers for chatting and real-time collaborative editing may increase robustness and speed as well.
6.3. Multiple Cursors Visible
One thing my program does not contain is the ability to see where each user's cursor is and what their current selections are. This type of behavior would work similarly to Google Docs and would be beneficial for spotting possible conflicts in edit positions.
6.4. Fully Web Hosted Servers
As of right now I have set up a single web-hosted server in my own home to hold examples for use outside the local network. In the future I would want to deploy a web-hosted version of the server so that users would not have to open ports to work with people on outside networks.
6.5. Fully Designed GUI
Although the GUI is roughly designed at this point it is not complete. Interesting and aesthetically pleasing elements still need to be added. Certain UI elements can still be modified to have custom styles so they do not look like common WPF user interfaces. However this is something that would be done near the end of development when all the functionality is complete and mostly bug free.
6. Conclusion
The Link IDE has been useful to showcase the possibilities of real-time collaboration inside an IDE. Having organized user and project authentication, along with web storage for projects, allows work to be continued at any location. Real-time collaborative editing can be a breath of fresh air as an approach to source control that may suit some parties better. Combining this type of synchronous editing with a task list that users can easily access and follow gives project members a good idea of what needs to be done and allows work to move forward in an organized manner. There is simply no reason that there shouldn't be an IDE that allows users to work together in such a close environment.

In the next few years it is inevitable that more and more software moves to real-time collaborative formats, including IDEs [7]. The most popular and actually used existing collaborative IDEs, including Cloud9 and eXo, are browser based. Although browser-based applications are naturally more accessible than traditional desktop applications, they are also limited in their processing power; certainly graphics-oriented, data-intensive, or mobile applications could not at this time be developed in a browser. For this reason I think that traditional desktop collaborative IDEs like Link IDE will also become popular in the coming years. It is also necessary that future collaborative IDEs focus not only on being web accessible but also on offering tools for organizing collaboration outside of the cloud, such as task lists. The Cola plug-in for Eclipse, which allows synchronous editing, is a mind-blowing tool, but it would be nice to see a product of Eclipse's quality, or even a version of Eclipse that supports this and other collaborative features out of the box. In the future I hope to work more on Link IDE and, possibly with the help of other developers, make it ready for a release.
7. References
[3] Cursor Preservation, Neil Fraser
http://neil.fraser.name/writing/cursor/
doi: 10.1109/TPDS.2008.240
[9] Avalon Edit
http://wiki.sharpdevelop.net/AvalonEdit.ashx
[10] Avalon Dock
[11] eXo Cloud IDE Website
[12] Cloud9 IDE Website
http://c9.io/
http://live.eclipse.org/node/543
[16] Usability Test Video
http://www.youtube.com/watch?v=pHKuf6Rt5MQ&feature=youtu.be
http://www.codeproject.com/Articles/25261/A-WCF-WPF-Chat-Application#xx0xx
8. Figures and Code Example Page Reference
8.1. Figures / Images
- Figure 1 – Before Edits
- Figure 2 – After edit with no transformation
- Figure 3 – After operation transformed
- Figure 4 – Overview of Differential Synchronization
- Figure 5 – Docking Capabilities
- Figure 6 – Code editor
- Figure 7 – SQL Projects, Usernames, and Create New Project page
- Figure 8 – SQL table and Create Task List
- Figure 9 – FTP relationship for projects
- Figure 10 – Error list
- Figure 11 – Create Task
- Figure 12 – Two task list view versions
- Figure 13 – Update Task
- Figure 14 – Chat Functionality
8.2. Code Examples
- Code Example 1 – Starting Java and Javac via Processes in C#
- Code Example 2 – Say and Whisper messages server side
- Code Example 3 – Callback methods on server side to be implemented by client
- Code Example 4 – Creating a diff and patching it into text
- Code Example 5 – Server side function for finding difference and patching it
- Code Example 6 – Listener placed on text editor to send text changes to server
- Code Example 7 – Server side response to text changing
- Code Example 8 – Client method for receiving text and adjusting cursor
Review -- 1 min
- Hardware support for synchronization
- Abstractions on top of hardware support (e.g., Lock)
- Shared objects
Outline - 1 min
Two kinds of synchronization
Monitor = lock + c.v. + shared state = shared object
Simple implementation
Best practices
Preview - 1 min
How to program with shared objects
Lecture - 32 min
1. Motivation
Writing concurrent programs hard – coordinate updates to shared memory
Synchronization – coordinating multiple concurrent activities that are using shared state
Question: what are the right synchronization abstractions to make it easy to build concurrent programs?
Answer will necessarily be a compromise:
• between making it easy to modify shared variables any time you want and controlling when you can modify shared variables.
• between really flexible primitives that can be used in a lot of different ways and simple primitives that can only be used one way (but are more difficult to misuse)
Rules will seem a bit strange – why one definition and not another?
• no absolute answer
• history has shown that they are reasonably good – if you follow these definitions, you will find writing correct code easier.
• for now just take them as a given; use it for a while; then, if you can come up with something better, be my guest!
2. Shared object abstraction
[PICTURE -- shared state, methods operating on shared state]
-- example -- bounded buffer/producer consumer queue
-- methods: add(), remove()
-- state: linked list (or array or ...), fullCount, ...
-- Accessed by several threads --> must synchronize access
3. 2 “types” of synchronization
Convenient to break synchronization into two cases
(1) Mutual exclusion – only allow one thread to access a given set of shared state at a time
E.g., bounded buffer
How do we do it?
Each shared object has lock and shared state variables
Public methods acquire the lock before reading/writing member state variables
(2) Scheduling constraints – wait for some other thread to do something
E.g., bounded buffer....
General problem
e.g., wait for other thread to finish, wait for other thread to produce work, wait for other thread to consume work, wait for other thread to accept a connection, wait for other thread to get bytes off disk, ...
How do we do it?
Need new synchronization primitive "Wait until X"
4. Definition of Semaphores
like a generalized lock
first defined by Dijkstra in the 1960s
originally main synchronization primitive in Unix (now others available)
**semaphore**—has a non-negative integer value and supports the following two operations:
semaphore.P() – an atomic operation that waits for the semaphore to become positive; then decrements it by 1
semaphore.V() – an atomic operation that increments the semaphore by 1, waking up a waiting P() if any
Like integers, except:
1) No negative values
2) Only operations are P() and V() – can’t read or write the value (except to set it initially)
3) Operations must be atomic – two P()’s that occur together can’t decrement the value below zero. Similarly, a thread going to sleep in P() won’t miss a wakeup from V(), even if they both happen at about the same time
**binary semaphore** – instead of an integer value, has a boolean value:
P() waits until value is 1, then sets it to 0
V() sets value to 1, waking up a waiting P() if any
5. Two uses of semaphores
5.1 Mutual exclusion
When semaphores are used for mutual exclusion, the semaphore has an initial value of 1, and P() is called before the critical section, and V() is called after the critical section.
```java
Semaphore semaphore = new Semaphore(1);
...
semaphore.P();
// critical section goes here
semaphore.V();
```
5.2 Scheduling constraints
Semaphores can be used to express general scheduling constraints – they provide a way to wait for something. Usually in this case (but not always) the initial value of the semaphore is 0.
Example: Wait for another thread to get done processing a request
********************************************************************
Admin - 3 min
********************************************************************
6. Producer-consumer with bounded buffer
6.1 Problem definition
producer puts things into a shared buffer
consumer takes them out
need synchronization for coordinating producer and consumer
e.g., cp | cc1 | cc2 | as
e.g., read/write network/disk (e.g., web server reads from disk, sends to network while your web client reads from network and draws to screen)
Don’t want producer and consumer to operate in lock-step, so put a fixed-sized buffer between them.
Synchronization—producer must wait if buffer is full; consumer must wait if buffer is empty
e.g., coke machine
producer is delivery person
consumer is students and faculty
Notice: shared object (coke machine) separate from threads (delivery person, students, faculty). Shared object coordinates activity of threads.
Common confusion on project—try to do the synchronization within the threads’ code. No, the synchronization happens within the shared objects. “Let the shared objects do the work.”
Solution uses semaphores for both mutex and scheduling
6.2 Correctness constraints for solution
In synchronization problems, semaphores represent 2 types of constraints:
- mutual exclusion
- waiting for some event
When you start working on a synchronization problem, first define the mutual exclusion constraints, then ask “when does a thread wait”, and create a separate synchronization variable representing each constraint
**QUESTION: what are the constraints for bounded buffer?**
1) only one thread can manipulate buffer queue at a time
*mutual exclusion*
2) consumer must wait for producer to fill buffers if none full
*scheduling constraint*
3) producer must wait for consumer to empty buffers if all full
*scheduling constraint*
Use a separate semaphore for each constraint
```
Semaphore mutex;
Semaphore fullBuffers;   // consumer’s constraint: if 0, no coke
Semaphore emptyBuffers;  // producer’s constraint: if 0, nowhere to put more coke
```
6.3 Solution
```
class CokeMachine{
    Semaphore mutex = new Semaphore(1);        // no one using machine
    Semaphore fullBuffers = new Semaphore(0);  // initially no coke!
    Semaphore emptyBuffers = new Semaphore(numBuffers);
                        // initially # empty slots; semaphore used to
                        // count how many resources there are

    Produce(Coke *coke){
        emptyBuffers.P();   // check if there is space for more coke
        mutex.P();          // make sure no one else using machine
        put 1 coke in machine
        mutex.V();          // OK for others to use machine
        fullBuffers.V();    // tell consumers there is now a coke in machine
    }

    Coke *Consume(){
        fullBuffers.P();    // check if there's a coke
        mutex.P();          // make sure no one else using the machine
        coke = take a coke out
        mutex.V();          // next person's turn
        emptyBuffers.V();   // tell producer we're ready for more
        return coke;
    }
}
```
6.4 Questions
Why does producer P and V different semaphores than consumer?
Is order of Ps important?
Is order of V's important?
What if we have 2 producers or 2 consumers? Do we need to change anything?
7. Implementing semaphores
last time: implement locks by turning off interrupts (or test&set)
Question: how would you implement semaphores? (let's solve problem with the “turning off interrupts” technique:
Here was lock code:
```cpp
member variables:
    int value;
    Queue *queue;

Lock::Lock(){
    value = FREE;
    queue = new Queue();
}

Lock::Acquire(){
    disable interrupts
    if (value == BUSY){
        put thread's TCB on queue of threads waiting for lock
        switch()
    }
    else{
        value = BUSY
    }
    enable interrupts
}

Lock::Release(){
    disable interrupts
    if anyone on wait queue{
        take a waiting thread's TCB off queue
        put it on ready queue
    }
    else{
        value = FREE
    }
    enable interrupts
}
```
Fill in the semaphore code:
Member variables:
Semaphore::Semaphore() // constructor
Semaphore::P()
/**
* Thread that calls P() should wait for the
* semaphore to become positive and then
* decrement it by 1
*/
Semaphore::V()
/**
* A thread that calls V() should increment
* the semaphore by 1, waking up a thread
* waiting in P() if any
*/
8. Problems with semaphores/Motivation for monitors
Semaphores a huge step up—just think of trying to do bounded buffer problem with just loads and stores—(busy waiting?)
**3 problems with semaphores**
**Problem 1**—semaphores are dual purpose—mutex, scheduling constraints
→ hard to read code
→ hard to get code right (initial values; order of P() for different semaphores, …)
**Problem 2**—Semaphores have “hidden” internal state
**Problem 3**—careful interleaving of “synchronization” and “mutex” semaphores
→ waiting for a condition is independent of mutex locks (to examine shared variables)
→ either cleverly define condition to map exactly to semaphore semantics (e.g., “12 buffers so initialize semaphore to 12” what if you don’t know ahead of time how many buffers?) OR clever code (interleaving mutex V() with check condition P()) OR both
idea of monitor—separate these concerns: use locks for mutex and condition variables for scheduling constraints
philosophy—think about Join() example with producer/consumer. Just one line of code to make it work with semaphores, but need to think a bit to convince self it really works—relying on semaphore to do both mutex (via atomicity) and condition. What happens when you change the code later to, say, give different priorities to different consumers?
9. Monitor definition
monitor – a lock and zero or more condition variables for managing concurrent access to shared data
monitor = shared object -- I'll use these terms interchangeably
NOTE: Historically monitors were first a programming language construct, where the monitor lock is automatically acquired on calling any procedure in a C++ class. (Java does something like this – you can specify that certain routines are synchronized) Book tends to describe it this way.
But you don’t need this – monitors are also a set of programming conventions that you should follow when doing thread programming in C or C++ or Javascript or … (or Modula c.f. Birrell): explicit calls to locks and condition variables
I will teach the “manual” version of monitors (and require that you do things manually on the projects) because I want to make sure it is clear what is going on and why.
9.1 Lock
The lock provides mutual exclusion to the shared data
Lock::Acquire() -- wait until lock is free, then grab it
Lock::Release() – unlock; wake up anyone waiting in Acquire
Rules for using a lock
• Always acquire before accessing shared data structure
• Always release after finishing with shared data
• Lock is initially free
Simple example: a synchronized list
class Queue{
public:
add(Item *item);
Item *remove();
private:
Lock mutex;
List list;
}
Queue::add(Item *item){
mutex.Acquire(); // lock before using shared data
list.add(item); // ok to access shared data
mutex.Release() // unlock after done w. shared data
}
Item *Queue::remove(){
    Item *ret;
    mutex.Acquire(); // lock before using shared data
    if (list.notEmpty()) { // something on queue, remove it
        ret = list.remove();
    }
    else{
        ret = NULL;
    }
    mutex.Release(); // unlock after done
    return ret;
}
QUESTION: Why "ret"?
Aside:
If you have exceptions (as in Java), another variation is:
Foo(){
    try{
        lock.lock();
        ...
        return item;
    }
    finally{
        lock.unlock();
    }
}
9.2 Condition variables
How do we change Queue::remove() to wait until something is on the queue? How do we change Queue::add() to bound number of items in queue (e.g., wait until there is room?)
Logically, want to transition to waiting state inside of critical section, but if hold lock when transition to waiting, other threads won’t be able to get in to add things to queue, to reenable the waiting thread
(Recall that for semaphores, we had essentially this problem and we solved it by cleverly doing our "accounting" for synchronization before we grabbed the lock for mutex. This type of subtle reasoning in programs worries me.)
Key idea with condition variables: make it possible to transition to waiting inside critical section, by atomically releasing lock at same time we transition to waiting
**Condition variable**: a queue of threads waiting for something **inside** a critical section
3 operations
Wait() – release lock; transition to waiting; reacquire lock
♦ releasing lock and transition to waiting are atomic
Signal() – wake up a waiter, if any
Broadcast() – wake up all waiters
**RULE**: must hold lock when doing condition variable operations
In lecture, I’ll follow convention: require lock as parameter to condition variable operations. Get in the habit; other systems don’t always require this.
Some will tell you you can do signal outside of lock. IGNORE THEM. This is only a (small) performance optimization, and it is likely to lead you to write incorrect code.
A synchronized queue with condition variables
```cpp
class Queue{
    ...
    static const int MAX;
private:
    Lock mutex;
    Cond moreStuff;
    Cond moreRoom;
    List list;
}

Queue::add(Item *item){
    mutex.Acquire();
    while(list.count == Queue::MAX){
        moreRoom.wait(&mutex);
    }
    list.insert(item);
    assert(list.count <= Queue::MAX);
    moreStuff.signal(&mutex);
    mutex.Release();
}

Item *Queue::remove(){
    mutex.Acquire();
    while (list.count == 0){
        moreStuff.wait(&mutex); // release lock; go to sleep; reacquire
    }
    ret = list.remove();
    assert(ret != NULL);
    moreRoom.signal(&mutex);
    mutex.Release();
    return ret;
}
```
9.3 Mesa/Hansen v. Hoare monitors
Need to be careful about precise defn of signal and wait
**Mesa/Hansen-style:** (most real operating systems)
Signaler keeps lock, processor
Waiter simply put on ready queue, with no special priority.
(In other words, waiter may have to wait to re-acquire lock)
**Hoare-style:** (most textbooks)
Signaler gives up lock and CPU to waiter; waiter runs immediately
Waiter gives up lock, processor back to signaler, when it exits critical section or if it waits again
Code above for synchronized queuing happens to work with either style, but for many programs it matters which you are using.
With Hoare-style, can change “while” in RemoveFromQueue to “if” because the waiter only gets woken up if item on the list.
With Mesa-style, waiter may need to wait again after being woken up b/c some other thread may have acquired the lock and removed the item before the original waiting thread gets to the front of the ready queue.
This means that as a general principle, you always need to check the condition after the wait, with mesa-style monitors (e.g., use a “while” instead of an “if”)
**Answer: Hansen**
Why (simple): That's what systems have
Why (deeper): That's what is better/right (IMHO)
(1) That's what systems have
(2) more modular -- safety property is local
(3) more flexible
code written to work under Hansen works under Hoare, but not vice versa
(4) spurious wakeups
real implementations (e.g., Java, Posix) say that "cond::wait()" can return if (a) cond::signal() is called, (b) cond::broadcast() is called, or (c) other, implementation-specific situations
Always use while(...) {cv.wait(*lock);}
10. Programming strategy:
(See “Programming with threads” handout for more details)
Goal: Systematic (“cookbook”) way to write easy to read and understand and correct multi-threaded programs
10.1 General approach
1. Decompose problem into objects
object oriented style of programming – encapsulate shared state and synchronization variables inside of objects
Note:
(1) Shared objects are separate from threads
(2) Shared object encapsulates code, synchronization variables, and state variables
**Warning:** most examples in the book are lazy and talk about “thread 1’s code” and “thread 2’s code”, etc. This is b/c most of the “classic” problems were studied before OO programming was widespread, and the textbooks have not caught up
**Hint:** don’t manipulate synchronization variables or shared state variables in the code associated with a thread, do it with the code associated with a shared object.
Point of possible confusion – in Java, Thread is a class, so Threads are objects. An object of a type that inherits from Thread or implements Runnable should never have a member variable that is a Lock or Condition; it should never say synchronized{}. Why? A thread’s state is by definition thread-local state.
Each thread tends to have a “main” loop that accesses shared objects but the thread object does not include locks or condition variables in its state, and the thread’s main loop code does not directly access locks or cv’s.
Locks and CVs are encapsulated in the shared objects.
Why?
(1) Locks are for synchronizing across multiple threads. Doesn’t make sense for one thread to “own” a lock!
(2) Encapsulation – details of synchronization are internal details of a shared object. Caller should not know about these details. “Let the shared objects do the work.”
1a. Identify units of concurrency. Make each a thread with a go() method. Write down the actions a thread takes at a high level.
1b. Identify shared chunks of state. Make each shared thing an object. Identify the methods on those objects – the high-level actions made by threads on these objects.
1c. Write down the high-level main loop of each thread.
Advice: stay high level here. Don't worry about synchronization
yet. Let the objects do the work for you.
Separate threads from objects. The code associated with a thread should not access shared state directly (and so there should be no access to locks/condition variables in the “main” procedure for the thread.) Shared state and synchronization should be encapsulated in shared objects.
Now, for each object:
2. Write down the synchronization constraints on the solution. Identify the type of each constraint: mutual exclusion or scheduling
3. Create a lock or condition variable corresponding to each constraint
4. Write the methods, using locks and condition variables for coordination
10.2 Coding standards/style
These are **required standards** in class. See the handout for details!
I taught m/t coding the standard way...
-- I explained locks give mutual exclusion...
-- I explained how condition variables work; how they are related to the shared state; Hoare v. Hansen, ...
Fall 2001 midterm:
- Every program with incorrect semantic behavior violated at least one rule
- >90% of programs that violated at least one rule were “obviously” semantically incorrect (that is, I could see the bug within seconds of looking at the program; there may have been additional bugs…)
- All that violate one rule are *wrong* – they are harder to read, understand, maintain, …
- Since I’ve declared “violating rule is *wrong*”, huge reduction in bugs in exams and projects
Passion for these rules goes deeper. I learned m/t coding the standard way...
These two experiences + this is really important --> I am a zealot...
The rules: (See handout)
1. Always do things the same way
2. Always use monitors (condition variables + locks)
Almost always more clear than semaphores + “always do things the same way”
3. Always hold lock when operating on a condition variable
You signal on a condition variable because you just got done manipulating shared state. You proceed when some condition about a shared state becomes true. Condition variables are useless without shared state and shared state is useless without holding a lock.
4. Always grab lock at beginning of procedure and release it right before return
- Simplifies reading your code (“always do things the same way”)
- If you find yourself wanting to release lock in middle of a procedure, 99% of time code would be more clear if you split it into two procedures
5. Always use
```
while(predicateOnStateVariables(...) == true/false){
    condition->wait(&lock);
}
```
not
```
if(...){ ... }
```
(Where PredicateOnStateVariables(...) looks at the state variables of the current object to decide if it is OK to proceed.)
While works any time if does, and it works in situations when if doesn't. By rule 1, you should do things the same way every time.
If breaks modularity
When you always use while, you are given incredible freedom about where you put the signal()’s. In fact, signal() becomes a hint -- you can add more signals to a correct program in arbitrary places and it remains a correct program!
→ Can determine correctness of signal calls and wait calls locally
6. (Almost) never sleep()
Never use sleep() to wait for another thread to do something. The correct way to wait for a condition to become true is to wait() on a condition variable.
sleep() is only appropriate when there is a particular real-world moment in time when you want to perform some action. If you catch yourself writing
    while(some condition)
        sleep();
treat this as a big red flag that you are probably making a mistake.
I'm sure there are valid exceptions to all of the above rules, but they are few and far between. And the benefit you get by occasionally breaking the rules is unlikely to make up for the cost in your effort, extra debugging and maintenance cost, and loss of modularity.
10.3 Java rules
In some years, we use Java for the project. Java is a modern language with support for threads from day 1. This is mostly good news. 2 issues:
(1) For production use: Support for some dangerous/undesirable constructs/styles of programming
(2) For teaching: “too much” support for multi-threading → someone can write code that invokes synchronization with or without knowing what’s going on
→ Coding standards for this class
(J1) Do not use synchronized blocks within method
This is a specific incarnation of rule (4) above “Always grab locks at beginning and release at the end”
The following is forbidden:
Foo()
{
...
synchronized(this){
...
}
...
}
Instead, move the synchronized block into its own method.
(J2) Cleanly separate Threads from shared objects
Classes that define Threads (e.g., that extend Thread or implement Runnable) should include per-thread state. They should not include shared state. They should not include locks or condition variables.
The model is threads operate on shared state (picture).
(J3) *For this class* the *synchronized* keyword is forbidden. Instead, explicitly allocate and invoke locks and condition variables.
The purpose of this rule is to make it easier to teach and learn how to think about synchronization.
Example (correct):
class Foo {
SimpleLock lock;
Condition c1;
Condition c2;
public Foo() {
lock = new SimpleLock();
c1 = lock.newCondition();
c2 = lock.newCondition();
...
}
public void doSomething(...) {
try {
lock.lock();
...
while(...) {
c1.awaitUninterruptibly();
}
...
c2.signal();
}
finally {
lock.unlock();
}
}
}
Example (acceptable):
```java
class Foo{
SimpleLock lock;
Condition c1;
Condition c2;
public Foo(){
lock = new SimpleLock();
c1 = lock.newCondition();
c2 = lock.newCondition();
...
}
public void doSomething(...){
lock.lock();
...
while(...){
c1.awaitUninterruptibly();
}
...
c2.signal();
lock.unlock();
}
}
```
Example (forbidden for this class; often correct in real world):
class Foo{
public Foo(){
...
}
public synchronized void doSomething(...){
...
while(...){
this.wait();
}
...
this.signal();
}
}
(Note that once you leave this class the above style can be used when an object needs one lock and one condition variable; if you need two condition variables, fall back on the manual version as in this class.)
10.4 Example/Basic template:
(1,2) Always use condition variables for code you write.
Be able to understand code written in semaphores. But the coding standard your manager (me) is enforcing for this group is condition variables for synchronization
class Foo{
private:
// Synchronization variables
Lock mutex;
Cond condition1;
Cond condition2;
...
// State variables
public:
Foo::foo()
{
/*
* (#4) Always, grab mutex at start of procedure, release at
* end (or at any return!!). Reasoning: if there is a logical
* set of actions to do when you hold a mutex, that logical
* set of actions should be expressed as a procedure, right?
*/
mutex->acquire()
Assert(invariants hold – shared variables in consistent state)
... invariants may or may not hold; shared variables may be
in inconsistent state
... // (#5) always “while” never “if”
while(shared variables in some state){
assert(invariants hold)
// (#3) Always hold lock when operating on C.V.
condition1->wait(&mutex)
assert(invariants hold)
}
...
... invariants may or may not hold; shared variables may be
in inconsistent state
...
... // (#3) Always hold lock when operating on C.V.
...
Assert(invariants hold)
    mutex->release()
}
}; // Class

13. Rule (#6): (Almost) never sleep()
Sleep(time) puts the current thread on a waiting queue at the timer – only use it to wait until a specific time, not to wait for an event of a different sort
Hint: sleep should never be in a while(…){sleep}
Problems with using sleep:
1) no atomic release/reacquire lock
2) really inefficient (example – cascading sleeps in Aname)
3) not logical
Warning: on the project and on exams, improper use of sleep will be regarded as strong evidence that you have no idea how to write multi-threaded programs and will affect your grade accordingly.
(I make this a point of emphasis b/c this error is so common in past years and easy to avoid.)
Aside: Double checked locking is broken example...
********************
Summary - 1 min
********************
Monitors represent the logic of the program. Wait if necessary, signal if change something so waiter might need to wake up.
mutex->lock
while (need to wait)
cv->wait();
mutex->unlock
mutex->lock
do something so no need to wait
cv->signal();
mutex->unlock
14. Implementing CV
Simple uniprocessor implementation:
class Cond{
private:
Queue waiting;
public:
void Cond::Wait(Lock *lock){
disable interrupts;
readyList->remove(current TCB);
waiting.add(current TCB);
lock->release();
switch();
enable interrupts;
lock->Acquire();
}
void Cond::Signal(Lock *lock){
disable interrupts;
if(waiting.notEmpty()){
TCB enabled = waiting.remove();
readyList->add(enabled);
}
enable interrupts;
}
void Cond::broadcast(Lock *lock){
disable interrupts;
while(waiting.notEmpty()){
TCB enabled = waiting.remove();
readyList->add(enabled);
}
enable interrupts;
}
}
15. Readers/Writers
15.1 Motivation
Shared database (for example, bank balances, or airline seats)
Two classes of users:
Readers – never modify database
Writers – read and modify data
Using a single mutex lock would be overly restrictive.
Instead, want:
many readers at same time
only one writer at same time
15.2 Constraints
Notice: for every constraint, there is a synchronization variable.
This time different types for different purposes.
1) Reader can access database when no writers (Condition okToRead)
2) Writers can access database when no readers or writers (condition okToWrite)
3) Only one thread manipulates shared variables at a time (mutex)
15.3 Solution
Basic structure
Database::read()
check in -- wait until no writers
access database
check out – wake up waiting writer
Database::write()
check in -- wait until no readers or writers
access database
check out – wake up waiting readers or writers
State variables:
AR = 0; // # active readers
AW = 0; // # active writers
WR = 0; // # waiting readers
WW = 0; // # waiting writers
Condition okToRead = NIL;
Condition okToWrite = NIL;
Lock lock = FREE;
Code:
Database::read()
{
startRead(); // first, check self into the system
Access Data
doneRead(); // Check self out of system
}
Database::startRead()
{
lock.Acquire();
while((AW + WW) > 0)
{
WR++;
okToRead.Wait(&lock);
WR--;
}
AR++;
lock.Release();
}
Database::doneRead()
{
lock.Acquire();
AR--;
if(AR == 0 && WW > 0) { // if no other readers still
okToWrite.Signal(); // active, wake up writer
}
lock.Release();
}
Database::write(){ // symmetrical
startWrite(); // check in
accessData
doneWrite(); // check out
}
Database::startWrite(){
lock.Acquire();
while((AW + AR) > 0){ // check if safe to write
// if any readers or writers, wait
WW++;
okToWrite.Wait(&lock);
WW--;
}
AW++;
lock.Release();
}
Database::doneWrite(){
    lock.Acquire();
    AW--;
    if(WW > 0){
        okToWrite.Signal(); // give priority to writers
    }
    else if (WR > 0){
        okToRead.Broadcast();
    }
    lock.Release();
}
Question
1) Can readers starve?
2) Why does checkRead need a while?
3) Suppose we had a large DB with many records, and we want many users to access it at once. Probably want to allow two different people to update their bank balances at the same time, right? What are issues?
16. Exercise: The sleeping barber
The shop has a barber, a barber chair, and a waiting room with NCHAIRS chairs. If there are no customers present, the barber sits in the barber chair and falls asleep. When a customer arrives, he wakes the sleeping barber. If an additional customer arrives while the barber is cutting hair, he sits in a waiting room chair if one is available. If no chairs are available, he leaves the shop. When the barber finishes cutting a customer’s hair, he tells the customer to leave; then, if there are any customers in the waiting room he announces that the next customer can sit down. Customers in the waiting room get their hair cut in FIFO order.
The barber shop can be modeled as 2 shared objects, a BarberChair with the methods napInChair(), wakeBarber(), sitInChair(), cutHair(), and tellCustomerDone(). The BarberChair must have a state variable with the following states: EMPTY, BARBER_IN_CHAIR, LONG_HAIR_CUSTOMER_IN_CHAIR, SHORT_HAIR_CUSTOMER_IN_CHAIR. Note that neither a customer nor the barber should sit down until the previous customer is out of the chair (state == EMPTY). Note that cutHair() must not return until the customer is sitting in the chair (LONG_HAIR_CUSTOMER_IN_CHAIR). And note that a customer should not get out of the chair (e.g., return from sitInChair) until his hair is cut (SHORT_HAIR_CUSTOMER_IN_CHAIR). The barber should only get in the chair (BARBER_IN_CHAIR) if no customers are waiting. You may need additional state variables.
The WaitingRoom has the method custEnter() which immediately returns WR_FULL if the waiting room is full or (immediately or eventually) returns MY_TURN when it is the caller’s turn to get his hair cut, and it has the method callNextCustomer() which returns WR_BUSY or WR_EMPTY depending on whether there is a customer in the waiting room or not. Customers are served in FIFO order.
Thus, each customer thread executes the code:
```c
void Customer(WaitingRoom *wr, BarberChair *bc)
{
int status = wr->custEnter();
if(status == WR_FULL)
return;
bc->wakeBarber();
bc->sitInChair(); // Wait for chair to be EMPTY
// Make state LONG_HAIR_CUSTOMER_IN_CHAIR
// Wait until SHORT_HAIR_CUSTOMER_IN_CHAIR
// then make chair EMPTY and return
return;
}
```
The barber thread executes the code:
```c
void Barber(WaitingRoom *wr, BarberChair *bc)
{
while(1){ // A barber’s work is never done
int status = wr->callNextCustomer();
if(status == WR_EMPTY){
bc->napInChair(); // Set state to BARBER_IN_CHAIR; return with state EMPTY
}
bc->cutHair(); // Block until LONG_HAIR_CUSTOMER_IN_CHAIR;
// Return with SHORT_HAIR_CUSTOMER_IN_CHAIR
bc->waitCustomerDepart(); // Return when EMPTY
}
}
```
Write the code for the WaitingRoom class and the BarberChair class. Use locks and condition variables for synchronization and follow the coding standards specified in the handout.
**Hint and requirement reminder:** remember to start by asking for each method “when can a thread wait?” and writing down a synchronization variable for each such situation.
List the member variables of class WaitingRoom including their type, their name, and their initial value
<table>
<thead>
<tr>
<th>Type</th>
<th>Name</th>
<th>Initial Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>mutex</td>
<td>lock</td>
<td></td>
</tr>
<tr>
<td>cond</td>
<td>canGo</td>
<td></td>
</tr>
<tr>
<td>int</td>
<td>nfull</td>
<td>0</td>
</tr>
<tr>
<td>int</td>
<td>ticketAvail</td>
<td>0</td>
</tr>
<tr>
<td>int</td>
<td>ticketTurn</td>
<td>-1</td>
</tr>
</tbody>
</table>
```cpp
int WaitingRoom::custEnter()
{
lock.acquire();
int ret;
if(nfull == NCHAIRS){
ret = WR_FULL;
}
else{
ret = MY_TURN;
int myTicket = ticketAvail++;
nfull++;
while(myTicket > ticketTurn){
canGo.wait(&lock);
}
nfull--;
}
lock.release();
return ret;
}
```
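The heart of custEnter() is a ticket/turn queue: each waiter takes the next ticket, sleeps until its number is called, and callNextCustomer() advances the turn. A standalone C++ sketch of that pattern (class and method names here are hypothetical, chosen for illustration):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Standalone version of the ticket/turn pattern used by custEnter():
// every waiter takes the next ticket number, then sleeps until its
// number is called; callNext() advances the turn and broadcasts, and
// only the thread holding the matching ticket gets past its while loop.
class TicketQueue {
    std::mutex m;
    std::condition_variable canGo;
    int ticketAvail = 0;
    int ticketTurn = -1;
public:
    int waitTurn() {
        std::unique_lock<std::mutex> lk(m);
        int myTicket = ticketAvail++;
        while (myTicket > ticketTurn)
            canGo.wait(lk);          // recheck the condition after every wakeup
        return myTicket;
    }
    void callNext() {
        std::lock_guard<std::mutex> lk(m);
        ticketTurn++;
        canGo.notify_all();          // broadcast, like canGo.broadcast() above
    }
};

// Serve n waiters one at a time and record the order they got through.
std::vector<int> serveInOrder(int n) {
    TicketQueue q;
    std::mutex sm;
    std::vector<int> served;
    std::vector<std::thread> waiters;
    for (int i = 0; i < n; i++)
        waiters.emplace_back([&] {
            int t = q.waitTurn();
            std::lock_guard<std::mutex> g(sm);
            served.push_back(t);
        });
    for (int i = 0; i < n; i++) {
        q.callNext();
        for (;;) {                   // wait until one more waiter got through
            { std::lock_guard<std::mutex> g(sm);
              if ((int)served.size() == i + 1) break; }
            std::this_thread::yield();
        }
    }
    for (auto &w : waiters) w.join();
    return served;
}
```

The broadcast wakes everyone, but only the holder of the matching ticket passes its while loop, which is what gives FIFO service.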
```cpp
int WaitingRoom::callNextCustomer()
{
lock.acquire();
int ret;
if(nfull == 0){
ret = WR_EMPTY;
}
else{
ret = WR_BUSY;
ticketTurn++;
canGo.broadcast();
}
lock.release();
return ret;
}
```
List the member variables of class BarberChair including their type, their name, and their initial value:
<table>
<thead>
<tr>
<th>Type</th>
<th>Name</th>
<th>Initial Value (if applicable)</th>
</tr>
</thead>
<tbody>
<tr>
<td>mutex</td>
<td>lock</td>
<td></td>
</tr>
<tr>
<td>cond</td>
<td>custUp</td>
<td></td>
</tr>
<tr>
<td>cond</td>
<td>barberGetUp</td>
<td></td>
</tr>
<tr>
<td>cond</td>
<td>sitDown</td>
<td></td>
</tr>
<tr>
<td>cond</td>
<td>seatFree</td>
<td></td>
</tr>
<tr>
<td>cond</td>
<td>cutDone</td>
<td></td>
</tr>
<tr>
<td>int</td>
<td>state</td>
<td>EMPTY</td>
</tr>
<tr>
<td>int</td>
<td>custWalkedIn</td>
<td>0</td>
</tr>
</tbody>
</table>
```cpp
void BarberChair::napInChair()
{
lock.acquire();
if(state == EMPTY){ // Cust could arrive before I sit down
state = BARBER_IN_CHAIR;
while(custWalkedIn == 0){
barberGetUp.wait(&lock);
}
state = EMPTY;
seatFree.signal(&lock);
}
lock.release();
}
```
```cpp
void BarberChair::wakeBarber()
{
lock.acquire();
custWalkedIn = 1;
barberGetUp.signal(&lock);
lock.release();
}
```
```cpp
void BarberChair::sitInChair()
{
lock.acquire();
while(state != EMPTY){
seatFree.wait(&lock);
}
custWalkedIn = 0;
state = LONG_HAIR_CUSTOMER_IN_CHAIR;
sitDown.signal(&lock);
while(state != SHORT_HAIR_CUSTOMER_IN_CHAIR){
cutDone.wait(&lock);
}
state = EMPTY;
custUp.signal(&lock);
lock.release();
}
```
```cpp
void BarberChair::cutHair()
{
lock.acquire();
while(state != LONG_HAIR_CUSTOMER_IN_CHAIR){
sitDown.wait(&lock);
}
state = SHORT_HAIR_CUSTOMER_IN_CHAIR;
cutDone.signal(&lock);
lock.release();
}
```
```cpp
void BarberChair::waitCustomerDepart()
{
lock.acquire();
while(state != EMPTY){ // NOTE: No other cust can arrive until I call callNextCustomer()
custUp.wait(&lock);
}
lock.release();
}
```
17. Semaphores v. Condition variables
Illustrate the difference by considering: can we build monitors out of semaphores? After all, semaphores provide atomic operations and queuing.
Does this work:
```
Wait() { semaphore->P() }
Signal() { semaphore->V() }
```
No: condition variables only work inside a lock. If we try to use semaphores inside a lock, we have to watch for deadlock: this Wait() sleeps on P() while still holding the lock, so no thread can ever enter the critical section to Signal().
Does this work:
```
Wait(Lock *lock){
lock->Release();
semaphore->P();
lock->Acquire();
}
```
```
Signal(){
semaphore->V();
}
```
Condition variables have no history, but semaphores do have history.
- What if a thread signals and no one is waiting? → No op.
- What if a thread later waits? → The thread waits.
- What if a thread V’s and no one is waiting? → The count is incremented.
- What if a thread later does P? → The count is decremented and the thread continues.
In other words, P and V are commutative: the result is the same no matter what order they occur in. Condition variables are not commutative. That’s why they must be used in a critical section: they need to access state variables to do their job.
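The "history" point can be made concrete with a minimal counting semaphore. This sketch adds a nonblocking `tryP()` purely so the count is observable; that helper is our own addition, not part of the classic P/V interface:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Minimal counting semaphore to illustrate "history": a V() with no
// waiter is remembered as count++, so a later P() sails through --
// unlike a condition variable's signal(), which is a no-op.
class Semaphore {
    std::mutex m;
    std::condition_variable cv;
    int count;
public:
    explicit Semaphore(int initial = 0) : count(initial) {}
    void P() {
        std::unique_lock<std::mutex> lk(m);
        while (count == 0) cv.wait(lk);
        count--;
    }
    void V() {
        std::lock_guard<std::mutex> lk(m);
        count++;                  // the "history": remembered even with no waiter
        cv.notify_one();
    }
    bool tryP() {                 // nonblocking P, added only for demonstration
        std::lock_guard<std::mutex> lk(m);
        if (count == 0) return false;
        count--;
        return true;
    }
};
```

V-then-P and P-then-V end in the same state, which is exactly the commutativity argument above.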
Does this fix the problem?
```cpp
Signal()
{
if semaphore queue is not empty
semaphore->V();
}
```
For one, it is not legal to look at the contents of the semaphore queue. Also, there is a race condition: the signaller can slip in after the lock is released and before the wait, and then the waiter never wakes up.
Need to release lock and go to sleep atomically.
Is it possible to implement condition variables using semaphores? Yes, but exercise left to the reader!
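One possible answer to the exercise, sketched under the assumption that Signal() is always called with the monitor lock held (names `SemCondVar`, `waiters` are our own): keep a waiter count protected by the monitor lock instead of peeking at the semaphore queue, and let the semaphore's history catch any V() issued in the release-to-P() gap.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// A plain counting semaphore, in the spirit of the notes above.
class Semaphore {
    std::mutex m;
    std::condition_variable cv;
    int count = 0;
public:
    void P() {
        std::unique_lock<std::mutex> lk(m);
        while (count == 0) cv.wait(lk);
        count--;
    }
    void V() {
        std::lock_guard<std::mutex> lk(m);
        count++;
        cv.notify_one();
    }
};

// Condition variable built from a semaphore plus our own waiter count.
// We never look inside the semaphore's queue (not legal); waiters is
// protected by the monitor lock, and signal() must be called with that
// lock held. The "signaller slips in between release and wait" race is
// closed because the semaphore's history catches a V() issued after
// waiters++ but before P().
class SemCondVar {
    Semaphore sem;
    int waiters = 0;   // guarded by the caller's monitor lock
public:
    void wait(std::mutex *lock) {  // caller holds *lock
        waiters++;
        lock->unlock();
        sem.P();                   // a V() posted in the gap is not lost
        lock->lock();
    }
    void signal() {                // caller must hold the monitor lock
        if (waiters > 0) {
            waiters--;
            sem.V();
        }
    }
};
```

As with Mesa-style monitors, callers should recheck their predicate in a while loop around wait(); a broadcast would V() once per waiter.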
*Summary* - 1 min
2 types of synchronization
- mutual exclusion
- scheduling/waiting
Semaphore can be used for both (is this good?)
Semaphore operations
- P()
- V()
Note: you can’t ask the value of a semaphore; you can only do `P()` and `V()`
Semaphore built on same hardware primitives as lock using essentially same techniques
Monitor = shared object = lock + `[CV]*` + state
Automatically Verifying and Reproducing Event-Based Races in Android Apps
Yongjian Hu
University of California
Riverside, CA 92521, USA
yhu009@cs.ucr.edu
Iulian Neamtiu
New Jersey Institute of Technology, NJ 07102, USA
ineamtiu@njit.edu
Arash Alavi
University of California
Riverside, CA 92521, USA
aalav003@cs.ucr.edu
ABSTRACT
Concurrency has been a perpetual problem in Android apps, mainly due to event-based races. Several event-based race detectors have been proposed, but they produce false positives, cannot reproduce races, and cannot distinguish between benign and harmful races. To address these issues, we introduce a race verification and reproduction approach named ERVA. Given a race report produced by a race detector, ERVA uses event dependency graphs, event flipping, and replay to verify the race and determine whether it is a false positive or a true positive; for true positives, ERVA uses state comparison to distinguish benign races from harmful races. ERVA automatically produces an event schedule that can be used to deterministically reproduce the race, so developers can fix it. Experiments on 16 apps indicate that only 3% of the races reported by race detectors are harmful, and that ERVA can verify an app in 20 minutes on average.
CCS Concepts
• Human-centered computing → Smartphones; • Software and its engineering → Software defect analysis;
Keywords
Google Android; Event-based races; Race verification; Happens-before relation; Event flipping
1. INTRODUCTION
Concurrency has been a perpetual problem in Android apps since the platform’s inception in 2007 [32]. The root of the problem is Android’s event-based programming model, where lack of synchronization between events leads to event-driven races. To find such races, several detectors have recently been developed, e.g., DroidRacer [21], CAFA [18], and EventRacer Android [7] (for brevity, we will refer to the latter as simply “EventRacer” since the scope of this paper is Android apps). They operate in a similar fashion. First, they define a set of Happens Before (HB) rules for Android’s event-driven model. Then, they instrument the Android platform to collect runtime traces and run the app under this instrumentation; the collected traces contain event begin/end/posting and memory read/write information. Finally, they analyze the trace according to the HB model graph. If there exist read/write or write/write operations on the same memory location and these operations are not ordered by HB, the tools report an event-driven race.
However, these tools have several drawbacks: (1) they are prone to false positives, (2) they cannot verify the effect of the race, e.g., is it a benign or harmful race, and (3) they do not give developers a way to reproduce the race. We now discuss these drawbacks and how our approach helps address them.
False positives. Most Android apps use ad-hoc synchronization to protect shared variable access across asynchronous events. Therefore, race detectors can improve their precision by identifying a broad range of synchronization operations, to avoid reporting safe/synchronized access as races. In our experience, even the most precise race detector currently available, EventRacer, is still prone to false positives. EventRacer attempts to filter out false positives by applying a technique called “race coverage” [24] which was previously used for event-driven races in web applications. While race coverage can greatly reduce the false positives rate, it still fails to identify certain categories (types) of false positives. In Section 3 we describe these categories.
Harmful vs. benign races. The second problem with current tools is that for true positives – accesses unprotected by synchronization – they fail to distinguish between benign and harmful races. Our study shows that only a very small portion of reported races are harmful. Previous studies have reached similar conclusions for desktop applications [26]. Since analyzing races requires substantial human effort, an approach with a high rate of false positives or benign races is less likely to be adopted by developers, as it is a high-investment/low-return activity. Thus we argue that we need an automatic race verification tool that can distinguish between benign and harmful races. In Section 3 we define benign races, harmful races, and false positives.
Reproducing races. Current Android race detectors do not help reproduce a race, hence developers have to manually adjust synchronization operations or timing (e.g., set breakpoints via debugger) until the race is reproduced – this is time-consuming and not guaranteed to succeed (in contrast, we provide deterministic replay, which guarantees that a race can be reproduced so the developers can help find and fix the cause of the race).
Our approach. To address these issues, we introduce ERVA (Event-race Reproducer and Verifier for Android)\(^1\), an automated approach and tool for verifying and reproducing event-based races in Android apps. ERVA, described in detail in Section 4, takes as input a report of a potential race and uses a suite of techniques to classify the race into one of three categories. First, if the race is a false positive, it is reported as such. If the race can be confirmed, it is classified as benign or harmful. To support this classification, we introduce event dependency graphs (EDG) and a novel definition of benign vs. harmful races in Android apps based on state comparison. If the race is harmful, ERVA automatically produces an event schedule that can be used to deterministically reproduce the race, so the developers can study the race, understand its cause, and fix it.
ERVA does not require access to the app source code, but rather relies on dynamic tracking of happens-before (HB) relationships, schedule replay, and “event flipping”. Given an app, ERVA proceeds in two stages. In the first stage, ERVA runs the app in the EventRacer [7] race detector to obtain a set of candidate races (pairs of race events). While the app is running in EventRacer, ERVA records replay information (e.g., UI events, input stream, sensor streams) and synchronization information (e.g., begin/end of thread and synchronization actions, event posting, etc); this information is used in later phases. For each candidate race from the report, ERVA’s post-run analysis will confirm whether the candidate is indeed a race, to distinguish between false positives and true positives.
The second stage examines the true positives to further distinguish between benign and harmful races. ERVA replays executions multiple times using the inputs recorded in the first stage, this time instrumenting the app to record app state. In each of these executions, ERVA “flips” – alternates the ordering of the events to check their side effects, i.e., the effect of flipping on app state (app state includes all the UI view states, shared preferences, file, database, network traffic). If the flipping has no side effect, ERVA categorizes the race as benign, otherwise it is declared as harmful. Since ERVA employs replay, developers have the opportunity to replay the app with those inputs and event schedules that lead to harmful races, to facilitate finding and fixing the cause of the race.
In Section 5 we present a study and evaluation of running ERVA on 16 real-world Android apps. The study found that out of the 260 race reports in these apps, only 8 (that is, 3\%) are harmful. Running ERVA takes about 20 minutes on average per app, which indicates that it is both effective and efficient at verifying and reproducing races.
In summary, our main contributions are:
1. Event dependency graphs and a definition of harmful vs. benign races for Android.
2. A practical tool, ERVA, which analyzes event-driven race reports to distinguish between false positives, benign races, and harmful races.
3. Debugging and fault location support: once the harmful race is confirmed, ERVA displays the event dependency graph as well as the flipped events, and can deterministically replay the app to help developers find the race’s root cause.
\(^1\)Available at http://spruce.cs.ucr.edu/valera/erva.html
2. BACKGROUND: ANDROID AND ITS EVENT MODEL
The Android software stack consists of apps using the services of the Android Framework (AF). Each app runs as a separate process on top of a custom, smartphone version of the Linux kernel. Android apps are typically written in Java and compiled to either Dalvik bytecode that runs in a VM (Android version < 5.0), or directly to native code (Android version ≥ 5.0).
The Android platform is event-driven, with the AF orchestrating the app control flow by invoking user-provided callbacks in response to user and system events. The AF provides support for events, threads, and synchronization. In Android, threads can communicate with each other in two ways: via messages (the most common way) or via shared memory (as in traditional Java applications, used sparsely).
In Android’s concurrency model, every application process has a main thread (also called “UI thread”); only the main thread can access the GUI objects, to prevent non-responsive threads from blocking the GUI. To update the GUI, other (non-main) threads can send messages to the main thread, and the main thread will dispatch these events to the appropriate user interface widgets. Long-running tasks such as network access and CPU-intensive operations are usually run in background threads. When these tasks are finished, the background threads post back messages (we call these messages internal events) together with the data to the UI thread. We now describe the Android threading model and then provide an example of how threads are used in a concurrent app.
Threading Model. The following grammar describes Android thread kinds.
\[
\begin{align*}
\text{Thread} & \quad ::= \text{Looper} \mid \text{Non-looper} \\
\text{Non-looper} & \quad ::= \text{Background} \mid \text{Binder}
\end{align*}
\]
Looper threads are threads with an associated Looper object that confers message-dispatching capabilities on the thread: the thread blocks waiting for messages, and when a message arrives, it is processed atomically. The main thread is a looper thread.
A background thread is the result of a regular thread fork() that does not register a Looper. A binder thread is created when an app is launched; binders are widely used for inter-process communication (IPC). Each app holds a binder thread pool; the number of binder threads in the pool is automatically adjusted based on IPC usage.
**Example.** Figure 1 shows a standard Android app that downloads an image. When the user touches the touchscreen, the hardware touch events are delivered to the Window Manager Service (WMS). WMS keeps a record of all the apps' windows, i.e., window coordinates and layers. WMS checks the hardware touchscreen event's coordinates and sends it to the corresponding app. A handler is then invoked on the app's UI thread. The handler traverses the view tree hierarchy and invokes the corresponding view's `onClickListener` method. If the user clicks a button, the handler posts an internal event with the `onClick` action to the UI thread's event queue. The `onClick` action forks a new background thread to download the image, offloading the task from the UI thread. The downloader thread may periodically send back internal events to show the percentage it has downloaded. When the download task is done, the downloader thread will post another event message along with the image to the UI thread. Finally, the UI thread decodes the image and displays it.
**Event Model.** The following grammar describes the Android event model.
\[
\begin{align*}
\text{Event} &::= \text{ExternalEvent} \mid \text{InternalEvent} \\
\text{ExternalEvent} &::= \text{InputEvent} \mid \text{SensorEvent} \mid \text{IPC} \mid \text{HardwareInterrupt} \\
\text{InputEvent} &::= \text{MotionEvent} \mid \text{KeyEvent} \\
\text{SensorEvent} &::= \text{Compass} \mid \text{Accelerometer} \\
\text{InternalEvent} &::= \text{Message} \mid \text{RunnableObject}
\end{align*}
\]
In Android, events can be either external or internal. External events originate in the hardware, cross into the OS and then into the AF. Apps can choose to use default handling for these events, in which case they are handled by the UI thread, or can register custom event handlers in other looper threads. Typical external events include input events (e.g., gesture or key events), sensor events (e.g., accelerometer, compass), IPC and hardware interrupts (such as VSYNC, a hardware "heartbeat" signal invoked 60 times per second). Internal events are messages or runnable objects sent from a non-looper thread to a looper thread. Internal events are created and sent via the `Handler` API at the AF or app level, rather than coming from the OS.
**Event Posting.** Event posting and processing is at the core of the Android platform. We have identified several event posting types. First, external events coming from the hardware or the OS that make a looper thread post messages to itself. For example, when the user clicks a button, the click gesture is actually a series of input events beginning with `ACTION_DOWN` and ending with `ACTION_UP`. In `ACTION_UP`, the UI thread will check which `View` object this event's coordinates are located in. If the `View` object has registered a click handler, the UI thread will post an `onClickListener` event to itself. Second, internal events, created and handled by the same thread — these events are created programmatically in the app code, in contrast to external events which come from the hardware or OS. Third, events (messages) generated by a looper, background or binder thread, and posted to another looper.
### 3. EVENT-BASED RACES: DEFINITION AND EXAMPLES
In this section we first present our model, including the happens-before relationship, which allows us to define true races, benign or harmful, as well as false positives. We then illustrate these on actual race reports in real-world apps.
#### 3.1 Event-based Race Definition
We begin by defining the concurrency model in terms of threads, events, memory locations, and operations on them. ERVA first records per-thread traces of events/operations, then uses a set of rules to establish HB on the traces, and finally classifies race reports into false positives, benign races, and harmful races. We now present the formal definitions that underlie ERVA's analyses.
**Definitions.** In our approach, threads \( t \) can be either loopers \( t^l \) or non-loopers \( t^{nl} \). For each thread we record a trace. For looper threads, their traces \( \tau(t) \) contain sequences of events \( e \). For non-looper threads, their traces \( \tau(t) \) contain sequences of operations \( op \). Operations \( op \) can be memory accesses \( \alpha \) (which capture the location \( \rho \) and the access kind: \( r \) for reads, \( w \) for writes); thread operations \( \gamma \), for example, \( fork(parent_{tid}, child_{tid}) \) or \( join(parent_{tid}, child_{tid}) \); or event postings \( \beta \). Event postings create new events \( e \) (event types were defined in the "Event Model" part of Section 2) by either sending a message \( m \) or posting a runnable object \( r \) to a looper thread \( t^l \) with a time delay \( \Delta \).
**Happens-before relationship.** Existing event-based race detectors [7, 18, 21] have proposed various HB definitions (\( \prec \)). We now proceed to define HB as a set of rules tied together by transitivity.
**Program order rule:** if an operation \( op_1 \) precedes another operation \( op_2 \) on the same thread in the trace, then they follow program order \( op_1 \prec_{op} op_2 \). Program order on non-looper threads implies HB, i.e., \( op_1 \in t^{nl} \wedge op_2 \in t^{nl} \wedge op_1 \prec_{op} op_2 \Rightarrow op_1 \prec op_2 \), but not on looper threads. Rather, HB on looper threads can only be introduced by the looper atomicity rule, discussed next.
**Looper atomicity rule:** the order of operations executed within one event establishes HB; that is, if \( op_1 \in e \wedge op_2 \in e \wedge op_1 \prec_{op} op_2 \), then \( op_1 \prec op_2 \). Event order rule: \( e_1 \prec e_2 \) if \( end(e_1) \prec_{op} begin(e_2) \). Event post rule: new events can be posted from looper threads \( t^l \) or non-looper threads \( t^{nl} \). For the former case, say \( \beta = post(e_2, t^l, m \mid r, \Delta) \wedge \beta \in e_1 \wedge e_1 \in t^l \), i.e., event \( e_1 \) posts an event \( e_2 \) to the looper thread \( t^l \) that \( e_1 \) belongs to; then \( e_1 \prec e_2 \). For the latter case, say \( \beta = post(e, t^l, m \mid r, \Delta) \) where \( \beta \) is executed on a non-looper thread \( t^{nl} \); then \( \beta \prec e \), and for all \( \alpha \in t^{nl} \) with \( \alpha \prec_{op} \beta \) we have \( \alpha \prec e \).
Thread rule: if thread \( t_i \) creates a new thread \( t_j \) (\( \gamma = fork(t_i, t_j) \)), then \( \forall \alpha \in t_j \) we have \( \gamma \prec \alpha \). Similarly, for a thread join \( \gamma = join(t_i, t_j) \), we have \( \forall \alpha \in t_j \Rightarrow \alpha \prec \gamma \).
External event rule: in our model, each external event sensor \( s_i \) has a \( begin(s_i, \theta) \) and \( end(s_i, \theta) \), where \( \theta \) is the sequence number. The external events within the boundary of \( begin(s_i) \) and \( end(s_i) \) are ordered by HB. For example, a click operation is a series of external events from the touchscreen starting with ACTION_DOWN, followed by many ACTION_MOVEs, and ending with ACTION_UP. Here \( begin(s_i, \theta) \) is ACTION_DOWN and \( end(s_i, \theta) \) is ACTION_UP. All the external events \( e_1, e_2, \ldots, e_n \) within this boundary follow HB order. However, if \( e_1 \) and \( e_2 \) are from two different sequences, then there is no strict HB order. An example is two click operations that could be triggered in alternative order.
Android component lifecycle rule: callbacks in different components such as Activity, Service, Fragment, View, etc. are ordered by HB. For instance, an Activity's onCreate callback is always invoked before its onDestroy. Based on Android's documentation [2], a lifecycle graph [8] can be built to precisely describe the HB relation between Android component callbacks.
Transitivity: HB is transitive, that is, \( \alpha_1 \prec \alpha_2 \wedge \alpha_2 \prec \alpha_3 \Rightarrow \alpha_1 \prec \alpha_3 \) and \( e_1 \prec e_2 \wedge e_2 \prec e_3 \Rightarrow e_1 \prec e_3 \).
Event-based races. We can now state the race definition. We say that event \( e_i \) *races* with event \( e_j \) if there exists a shared variable \( \rho \) such that \( \alpha_i(\rho) \in e_i \), \( \alpha_j(\rho) \in e_j \), at least one of the accesses is a write, and the events are not ordered by HB, i.e., \( e_i \nprec e_j \wedge e_j \nprec e_i \). On occasion we will refer to the event pair that satisfies this definition as “racy”.
False positive. We define as false positive a race reported between \( e_i \) and \( e_j \) by a race detector (e.g., EventRacer) whereas ERVA can establish that the events are actually ordered by HB, i.e., either \( e_i \prec e_j \) or \( e_j \prec e_i \).
Access influence. Let \( \alpha_1 = \alpha_{\tau_1}(\rho_1) \) and \( \alpha_2 = \alpha_{\tau_2}(\rho_2) \). We say that access \( \alpha_1 \) influences access \( \alpha_2 \) (denoted \( \alpha_1 \rightarrow \alpha_2 \)) if executing \( \alpha_1 \) leads to a different value for \( \rho_2 \) compared to omitting \( \alpha_1 \).
Benign race. We say that two events \( e_i \) and \( e_j \) have a benign race if they have an event-based race (which we defined above) on at least one location \( \rho \), but for all \( \alpha_i \in e_i \) and \( \alpha_j \in e_j \), neither \( \alpha_i \rightarrow \alpha_j \) nor \( \alpha_j \rightarrow \alpha_i \). That is, the different order of executing \( \alpha_i \) and \( \alpha_j \) does not affect the externally visible state (EVS). EVS can be customized by the user; ERVA’s default EVS definition is presented in Section 4.6.
Harmful race. We define as harmful a race where event execution order influences program state and this is reflected in the EVS. More precisely, we say that two events \( e_i \) and \( e_j \) have a harmful race if they have an event-based race on at least one location \( \rho \) and there exist \( \alpha_i \in e_i \) and \( \alpha_j \in e_j \) such that \( \alpha_i \rightarrow \alpha_j \) or \( \alpha_j \rightarrow \alpha_i \). Harmful races can have various consequences, e.g., crash, exception, erroneous GUI state; we provide examples of harmful races in real-world apps in Section 5.1.
### 3.2 False Positive Type-1: Imprecise Android Component Model
False positives may arise due to imprecise modeling of the Android components and their interaction. Figure 2 shows an example: a race reported by EventRacer in AnyMemo (a flashcard app) that is actually a false positive. The RecentListFragment is a subclass of Android's Fragment component. In the onResume() callback, the app performs a database query and updates the recent list in the fragment's views. Since the database access is time-consuming, to make the app responsive, the database query is performed by a background task (lines 14–16). When the task is done, a callback will be posted to the main thread, and the main thread updates the UI (lines 18–22).
The race detector reports a race between the `onCreateView()` and `onResume()` callbacks. Due to imprecise modeling, the race detector cannot find any HB relation between these callbacks. Hence, since `onCreateView()` writes the `mAdapter` variable and `onResume()` reads the same variable, a read-write race is reported.
However, this race is actually a false positive. According to the Android documentation, a Fragment’s `onCreateView()` method is always invoked before its `onResume()` method. Thus this read-write race can never happen.
### 3.3 False Positive Type-2: Implicit Happens-before Relation
Another category of false positives is due to imprecise modeling of the happens-before relationship. Figure 3 shows an example of a false positive caused by an implicit HB relation in Cool Reader, an eBook reader app. EventRacer reports that the callbacks `onRecentBooksListLoaded` and `getOrLoadRecentBooks` have a race condition because they both access the `mBooks` shared object, but the tool cannot derive any HB relation between these two callbacks. The `CoolReaderActivity` is an instance of an Android `Activity` subclass, i.e., a separate screen. Its lifecycle starts with the `onStart()` callback invoked on the main thread. In `onStart()`, the app first starts the database service `CRDBService`. If the service starts successfully, a `Runnable` callback will be posted back to the main thread indicating that the database service is ready. The callback first tries to
---
**Figure 2:** False positive type-1 in the AnyMemo app.
```java
 1 public class RecentListFragment extends Fragment {
 2   private ArrayAdapter mAdapter = null;
 3   private Handler mHandler = null;
 4
 5   @Override
 6   public View onCreateView(View...) {
 7     mHandler = new Handler();
 8     mAdapter = new ArrayAdapter(...);
 9   }
10
11   @Override
12   public void onResume() {
13     Thread thread = new Thread() {
14       public void run() {
15         // query database operations
16         mHandler.post(new Runnable() {
17           public void run() {
18             mAdapter.clear();
19             for (RecentItem ri : database) {
20               mAdapter.insert(ri);
21             }
22           }
23         });
24       }
25     };
26     thread.start();
27   }
28 }
```
load some history records by invoking `loadFromDB()`, then creates a new `CRRootView` object. These two methods will both post callbacks (detailed implementation omitted).
Note that the `loadFromDB` and `CRRootView` initialization methods are invoked in the same action on the Looper thread (i.e., the main thread). According to the looper atomicity rule, `loadFromDB` happens before `CRRootView`; in other words, the calls `loadFromDB` and `CRRootView` are in program order. In the implementation of these two methods, they both use the `Handler.post(Runnable r)` to post callbacks. The `post` method inserts the actions into the queue in FIFO order. Since `loadFromDB` posts the callback before `CRRootView`, the callback `onRecentBooksListLoaded` will always happen before `getOrLoadRecentBooks()`. However, EventRacer misses this implicit HB relation and thus reports a race, which is a false positive in this case.
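The FIFO argument above can be sketched in plain Java. The class below is a hypothetical `MiniLooper`, not Android's real `Looper`: two callbacks posted in program order from the same action are dispatched in that same order, which is why `onRecentBooksListLoaded` always precedes `getOrLoadRecentBooks()`.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Minimal sketch of a looper's FIFO message queue (hypothetical, not Android's Looper).
public class MiniLooper {
    private final Queue<Runnable> queue = new ArrayDeque<>();

    public void post(Runnable r) { queue.add(r); }   // Handler.post analogue: FIFO enqueue

    public void loop() {                              // dispatch until the queue drains
        while (!queue.isEmpty()) queue.poll().run();
    }

    // Two callbacks posted in program order ("loadFromDB" first) run in that order.
    public static List<String> run() {
        MiniLooper looper = new MiniLooper();
        List<String> order = new ArrayList<>();
        looper.post(() -> order.add("onRecentBooksListLoaded")); // posted by loadFromDB
        looper.post(() -> order.add("getOrLoadRecentBooks"));    // posted by CRRootView init
        looper.loop();
        return order;
    }

    public static void main(String[] args) { System.out.println(run()); }
}
```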
### 3.4 Benign Race Type-1: Control Flow Protection
We now discuss a benign race in Volley, a popular HTTP library [4]. Figure 4 shows the relevant source code. EventRacer reports a race on the `mRunnable` object. The method `batchResponse` on line 4 and the creation of the `Runnable` object on line 6 are two distinct actions executed on the main thread. On line 6, the `mRunnable` object is updated to point to a new `Runnable` object while on line 9 it is set to `null`. Since EventRacer does not capture the HB relation between these two actions, it reports a write-write race, but it is a benign race.
The `null` test on line 5 can be true or false depending on when the next `batchResponse` is executed. Usually, `Runnable.run()` executes before the next `batchResponse`: `mRunnable` has been set to `null` (line 9), so the next `batchResponse` creates a new `Runnable` (line 6) and posts it to the main thread’s looper (line 12). However, when multiple `batchResponse` actions are queued and executed before `Runnable.run()`, the check on line 5 sees that `mRunnable` is already non-`null`, takes the `else` branch, and does nothing. Thus the order in which `batchResponse` and the `Runnable` execute does not matter, due to the control-flow protection offered by the `if` on line 5. This race is classified as benign.
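A minimal sketch of this control-flow protection in plain Java (a hypothetical `BatchPoster` class; the real Volley code differs): however many `batchResponse` actions run before the `Runnable`, the line-5-style guard ensures that only one `Runnable` is ever live.

```java
// Sketch of control-flow protection around a shared field (field names follow Figure 4).
public class BatchPoster {
    private Runnable mRunnable;          // shared between the two racy actions
    private int postedCount = 0;         // how many Runnables actually got created

    // One batchResponse action: only creates a new Runnable when none is pending.
    public void batchResponse() {
        if (mRunnable == null) {                  // the line-5-style guard
            mRunnable = () -> mRunnable = null;   // run() clears the field (line-9 analogue)
            postedCount++;
        }                                          // else: a Runnable is pending, do nothing
    }

    public void runPending() {
        if (mRunnable != null) { Runnable r = mRunnable; r.run(); }
    }

    // Whatever interleaving of batchResponse calls occurs before run(),
    // only one Runnable is created, so the write-write race is benign.
    public static int postsFor(int batchCallsBeforeRun) {
        BatchPoster p = new BatchPoster();
        for (int i = 0; i < batchCallsBeforeRun; i++) p.batchResponse();
        p.runPending();
        return p.postedCount;
    }
}
```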
### 3.5 Benign Race Type-2: No State Difference
Figure 5 shows an example of benign race type-2 in the AnyMemo app. When `QACardActivity` is launched, it will create several loaders to load data from the database or configuration files. In the `startLoading` method, the global variable `runningLoaderCount` tracks how many active loaders are currently running. When the loader finishes, it will post an `onLoadFinished` callback to the main thread and invoke the `checkAllLoaderCompleted` method. In this method, the variable `runningLoaderCount` is first decreased; if `runningLoaderCount <= 0`, it will invoke the `onAllLoaderComplete` callback to inform that all the loaders have finished their job.
Since the time spent in the loader is unpredictable, the order of these two `onLoadFinished` callbacks executed on the main thread is not deterministic. The race detector reports this pair of callbacks as a race because it cannot find any
HB relation and these callbacks do write to the same object, `runningLoaderCount`. Although this reported race is a true positive, it is actually harmless, because app state does not depend on the order in which the callbacks write to `runningLoaderCount`. ERVA can flip the execution order of the callbacks and does not find any harmful effect (EVS difference). Thus this race is classified as benign.
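The flip-and-compare check ERVA performs here can be sketched in plain Java (a hypothetical `LoaderFlip` class): because decrementing `runningLoaderCount` is commutative, executing the two `onLoadFinished` callbacks in either order yields the same final state.

```java
// Sketch of the benign-race-type-2 check on the loader counter: run the two
// onLoadFinished callbacks in both orders and compare the final state.
public class LoaderFlip {
    private int runningLoaderCount = 2;   // two loaders were started
    private boolean allComplete = false;

    private void checkAllLoaderCompleted() {
        runningLoaderCount--;                              // the racy write
        if (runningLoaderCount <= 0) allComplete = true;   // onAllLoaderComplete()
    }

    public static boolean flipEquivalent() {
        LoaderFlip original = new LoaderFlip();
        original.checkAllLoaderCompleted();   // loader A finishes first
        original.checkAllLoaderCompleted();   // then loader B

        LoaderFlip flipped = new LoaderFlip();
        flipped.checkAllLoaderCompleted();    // loader B finishes first this time
        flipped.checkAllLoaderCompleted();    // then loader A

        // Decrement is commutative: both orders end in the same state.
        return original.allComplete && flipped.allComplete
            && original.runningLoaderCount == flipped.runningLoaderCount;
    }
}
```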
### 4. APPROACH
Figure 6 shows an overview of our approach; it consists of two phases, a race detection phase and a race verification phase. Note that ERVA relies on dynamic analysis and instrumentation, hence the app source code is not required. In the race detection phase, we run the app on an instrumented platform; the traces described in Section 3.1 are collected at this stage. The platform’s instrumentation consists of three main modules. First, a publicly-available custom Android emulator\(^3\) with EventRacer running on top of it. Second, an input capture module provided by the VALERA [19] record-and-replay tool. Third, an event capture module, shown in thicker black lines and font as it is a contribution of this work, unlike EventRacer and VALERA, which are off-the-shelf tools. EventRacer runs the app and produces a race report. ERVA saves the instrumentation results in an input log and an event dependency graph (EDG), respectively. With these logs at hand, we proceed to the race verification phase.
In the verification phase, we replay the execution multiple times, flipping event order. The platform used for this phase can be either the emulator or an actual Android phone. Using the input log collected during detection, we use the input replay support of VALERA to ensure that the input provided to the app in this phase is the same as during detection (record). Due to event flipping we have multiple executions; we capture app state from each execution and then use app state comparison to classify each potential race as a false positive, a benign race, or a harmful race. We now provide details on each of these components.
### 4.1 Race Detection
We choose EventRacer [7] as the race detector in ERVA as it is publicly available and robust. Compared with CAFA [18] and DroidRacer [21], EventRacer’s HB model is more precise, while its race reports are easy to parse.
### 4.2 Input Capture and Replay
To capture app input for subsequent replay, we leverage VALERA [19], a tool that can record and replay app input, e.g., touchscreen, network, GPS, etc. VALERA can also capture and replay event schedules, but it does so “blindly” — it does not capture event dependencies that are instrumental for this work.
**IPC Events.** Android apps use IPC heavily, for isolation reasons. For instance, the Input Method Manager Service (IMMS) uses IPC: when the user inputs text into the current window, the soft keyboard is actually a global service rather than code running in the app’s address space. The IMMS receives the inputs from the user and dispatches them to the currently active window via IPC calls.
In Android, IPC is carried out via the Binder mechanism: each app has several Binder threads to handle incoming IPC calls. ERVA records the data and timing of each Binder transaction. The data contains two parts: primitive data and reference data. Reference data such as Binder object references and OS file descriptors are not deterministic across different runs. For primitive data, ERVA saves their concrete content while for reference data ERVA just keeps a slot in the log. The concrete value is filled in during the real execution.
**Threads and Event Posting.** ERVA intercepts thread and event operations to capture the HB relationships defined in Section 3.1. Android apps achieve asynchronous programming via message posting. When background threads need to update the UI, they post messages (events) to the Looper on the UI thread. As described in Section 2, there are three types of event posting. ERVA captures these messages by recording relevant APIs such as `Handler.sendMessage` and `Handler.post`.
### 4.3 Handling I/O Non-determinism
Besides event schedule non-determinism, I/O is another source of non-determinism. Since we use app state comparison to classify benign and harmful races, it is crucial to eliminate I/O non-determinism because it can affect app state and lead to divergence between different replays of the same execution. ERVA leverages VALERA [19] to make I/O input deterministic by recording and replaying I/O from a variety of sources: file system operations, network input, GPS, microphone, camera, random number API, etc.
### 4.4 Event Dependency Graph
By capturing the external and internal events and their posting, ERVA builds an event dependency graph (EDG). Figure 7 illustrates EDGs by showing an excerpt from the actual EDG of the TomDroid app. Each edge in the graph describes the causal relationship between events. For example, in Figure 7, the user performs two interactions with the app. First, the user clicks the ‘Sync’ button on the ViewNoteActivity. The onClick listener will create a background thread that performs a long-running task (data synchronization with the cloud). When the SyncThread is successfully created, the app will show an animation indicating that the task is running in the background. After the task is finished, the SyncThread will post an internal message to the UI thread to stop the animation. When the user is notified that the Sync task is done, she can use the ‘Back’ button to go back to the previous activity. The Back button press will trigger the onBackPressed() handler. The ViewNoteActivity sends an IPC binder transaction to inform the Activity Manager Service (AMS) to finish the current activity. AMS handles this transaction and then sends back an IPC to the app telling the UI thread to switch to the NoteListActivity. This activity contains updated content hence the user sees an updated screen.
The EDG precisely describes each event transition. Using the EDG, ERVA knows the root cause of a particular event. For instance, the ViewNoteActivity.updateUI() is triggered because the user has clicked the ‘Sync’ button and this event will create another background thread. During replay, an event can be ready to replay if and only if its recorded preceding event has been replayed. This is controlled by ERVA’s underlying scheduler which will be described next.
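The executability rule above can be sketched as a small graph structure in plain Java (a hypothetical `Edg` class, not ERVA's actual implementation): an event is executable only after all of its recorded predecessors have executed.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of an event dependency graph (EDG): an event is
// executable only after all of its recorded predecessor events have executed.
public class Edg {
    private final Map<String, List<String>> preds = new HashMap<>();
    private final Set<String> executed = new HashSet<>();

    public void addEdge(String from, String to) {   // "from" causally precedes "to"
        preds.computeIfAbsent(to, k -> new ArrayList<>()).add(from);
    }

    public boolean isExecutable(String event) {
        return executed.containsAll(preds.getOrDefault(event, List.of()));
    }

    public void markExecuted(String event) { executed.add(event); }

    // TomDroid-style chain: onClick -> SyncThread -> updateUI.
    public static boolean demo() {
        Edg g = new Edg();
        g.addEdge("onClick", "SyncThread");
        g.addEdge("SyncThread", "updateUI");
        boolean blockedAtFirst = !g.isExecutable("updateUI"); // predecessors not yet run
        g.markExecuted("onClick");
        g.markExecuted("SyncThread");
        return blockedAtFirst && g.isExecutable("updateUI");
    }
}
```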
### 4.5 Event Flipping
In Android, each Looper has an associated Message Queue. The Looper runs in an infinite loop, waiting for incoming events and placing them in a queue. Messages (i.e., events) are dispatched by invoking the event handler’s callback. We changed the Looper implementation to support flipping “racy” (unordered by HB) pairs of events, as follows. During replay, ERVA retrieves all recorded events (VALERA saves those to support event replay). Whenever our modified Looper receives an event, it checks whether this event is executable according to the EDG. If all the precedent events in the EDG have been executed, the message will be dispatched to the handler as usual. Otherwise, the event is postponed (added to a “pending” queue) because it could potentially be flipped.

\(^3\)http://eventracer.org/android/
For example, in Figure 7, the ViewNoteActivity.updateUI() and NoteListActivity.onResume() do not have an HB relation according to the race detector, which means their execution order can be flipped. To flip the events, ERVA adds a “fake” dependency edge in the EDG, as shown in Figure 8. During replay, the updateUI event handler comes before the Back Key handler, but this time updateUI cannot be executed because it has a preceding event (NoteListActivity.onResume) in the EDG. Thus the event is added to the pending queue. When the NoteListActivity is finished, the looper scheduler notices that onResume has a succeeding edge in the EDG, i.e., updateUI. The scheduler then inspects the pending queue, finds updateUI, and allows it to execute. To summarize, this strategy guarantees that the order of events is flipped compared to the original (record) order.
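The pending-queue strategy can be sketched in plain Java (a hypothetical `FlipScheduler`; ERVA's real scheduler lives inside the modified Looper): an event is dispatched only when its EDG predecessors have run, and a fake edge forces the flipped order.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the flipping scheduler: arriving events are
// dispatched only if all their EDG predecessors already ran; otherwise they
// wait in a pending queue that is re-checked after every dispatch.
public class FlipScheduler {
    public static List<String> replay(List<String> arrival, Map<String, List<String>> preds) {
        List<String> dispatched = new ArrayList<>();
        Deque<String> pending = new ArrayDeque<>();
        for (String e : arrival) {
            if (ready(e, preds, dispatched)) {
                dispatched.add(e);
                drain(pending, preds, dispatched);   // a dispatch may unblock pending events
            } else {
                pending.add(e);                      // postponed: a predecessor has not run yet
            }
        }
        drain(pending, preds, dispatched);
        return dispatched;
    }

    private static boolean ready(String e, Map<String, List<String>> preds, List<String> done) {
        return done.containsAll(preds.getOrDefault(e, List.of()));
    }

    private static void drain(Deque<String> pending, Map<String, List<String>> preds, List<String> done) {
        boolean progress = true;
        while (progress) {
            progress = false;
            for (Iterator<String> it = pending.iterator(); it.hasNext(); ) {
                String e = it.next();
                if (ready(e, preds, done)) { it.remove(); done.add(e); progress = true; }
            }
        }
    }
}
```

With the fake edge onResume → updateUI, replaying the original arrival order [updateUI, onResume] dispatches [onResume, updateUI], i.e., the flipped order.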
### 4.6 State Recording and Comparison
Replay-based race classification has been used in prior tools [5, 26]: it starts from an execution that experienced one ordering of the racing accesses and re-runs it while enforcing another ordering, then compares the states of the program to check whether the race is benign or harmful. The main problem with these tools is that, because they use instruction-level deterministic replay, their overhead is too high; they would not work for Android apps for several reasons. First, Android devices usually have limited computation and storage resources. Second, whole-system instruction-level replay would be difficult on mobile devices without hardware changes. Third, Android apps are sensitive to timing: a large slowdown is likely to incur ANR (Application Not Responding) errors. Fourth, Android’s UI gestures are very sensitive to input timing, and large overhead may change gesture semantics, leading to replay divergence [15, 19].
We define Externally Visible State (EVS) as the subset of app state that might be accessed, or viewed, by the user; in ERVA the EVS consists of GUI objects (layouts, views, images) and Shared Preferences (a system-wide key-value store where apps can save private or public data [3]). The extent of the EVS can be customized by ERVA users. However, for this work we decided to limit the EVS to just the GUI and Shared Preferences for two reasons: (1) capturing more state, e.g., file contents, would incur higher overhead and lead to spurious differences; and (2) Android event-race bugs tend to manifest as GUI differences or crashes [21, 18, 24]. Hence, instead of recording and comparing whole-memory contents, ERVA finds state differences (hence harmful races) via EVS snapshot differencing, as follows: (1) in the original event-order execution, ERVA snapshots the EVS upon entering or leaving each activity into EVS_original; (2) likewise, ERVA snapshots the EVS after the event order is flipped, into EVS_alternate; and (3) ERVA compares EVS_original and EVS_alternate to find differences: a benign race should show no difference, that is, the user cannot tell the difference between the original and alternate executions. Note that some differences might still exist in hidden state, e.g., memory contents or VM state, but these differences are not our focus — in our experience, many are spurious — rather, we expose those races that lead to visible EVS differences.
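The differencing step itself reduces to comparing two snapshots. A minimal sketch in plain Java (a hypothetical `EvsDiff` class; snapshots are modeled here as id-to-value maps, a simplification of ERVA's GUI and Shared Preferences capture):

```java
import java.util.Map;

// Hypothetical sketch of EVS snapshot differencing: a snapshot maps a GUI
// element id (or shared-preference key) to its visible value; a race is
// flagged harmful only when the original and flipped snapshots differ.
public class EvsDiff {
    public static boolean isHarmful(Map<String, String> evsOriginal,
                                    Map<String, String> evsAlternate) {
        return !evsOriginal.equals(evsAlternate);   // any visible difference => harmful
    }
}
```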
### 4.7 Race Verification
As described in Section 3, ERVA classifies race reports into five bins: two types of false positives, two types of benign races, and harmful races. We now describe how ERVA performs this classification.
**False positives type-1** occur because race detectors do not model the app’s lifecycle callback events, or do not model them precisely. Flipping such a pair results in a deadlock, because the flipped schedule requires an event with logical timestamp 2 to run before an event with logical timestamp 1, which the lifecycle never allows. Once ERVA detects a deadlock after flipping the events, it bins the report as false positive type-1.

---

**Figure 6:** Overview of ERVA.
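This deadlock check can be sketched as cycle detection on the EDG in plain Java (a hypothetical `LifecycleCheck` class; ERVA detects the deadlock dynamically rather than via graph analysis): the flip edge plus the lifecycle edge form a cycle, so the flipped schedule can never complete.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the FP-type-1 check: if adding the "flip" edge to
// the lifecycle order creates a cycle in the EDG, the flipped schedule
// deadlocks, so the report is a type-1 false positive.
public class LifecycleCheck {
    public static boolean flipDeadlocks(Map<String, List<String>> edges) {
        Map<String, Integer> color = new HashMap<>(); // 0/null unvisited, 1 on stack, 2 done
        for (String n : edges.keySet()) if (dfs(n, edges, color)) return true;
        return false;
    }

    private static boolean dfs(String n, Map<String, List<String>> edges, Map<String, Integer> color) {
        Integer c = color.get(n);
        if (c != null) return c == 1;                 // back edge onto the stack => cycle
        color.put(n, 1);
        for (String m : edges.getOrDefault(n, List.of())) if (dfs(m, edges, color)) return true;
        color.put(n, 2);
        return false;
    }
}
```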
For false positive type-2, the cause is missing implicit HB relations. ERVA detects this type of FP by analyzing the EDG. For example, for onRecentBooksListLoaded and getOrLoadRecentBooks (the racy pair of events in the Cool Reader example from Section 3.3), the EDG shows that the event posters are within the same event and are ordered by program order. Since Handler.post(Runnable) follows the FIFO property, these two events cannot be flipped. Note that, had one of the posters used postDelayed(Runnable r, long time), the events would be flippable.
For benign race type-1, memory accesses are protected by the control flow. ERVA first tries to flip the racy pair and finds the events can be flipped. Then, during event execution, ERVA enables tracing of instructions to detect reads and writes protected by control flow. In the example shown in Figure 4, the access of mRunnable is protected by the branch condition in line 5. By analyzing the instruction trace of flipped events, ERVA bins such reports as benign race type-1.
For benign race type-2, ERVA flips the order of events and finds that the memory value is different after flipping; thus this is a true race. Next, ERVA dumps the state of the app (Section 4.6) and finds no difference. Thus ERVA classifies this as a benign race type-2.
### 5. EVALUATION
We now describe our experimental setup, then evaluate the effectiveness and efficiency of ERVA.
**Environment.** The race detector used in our experiments is the publicly-available EventRacer for Android [1]. ERVA is based on Android version 4.3.0. All the experiments were conducted on the Android emulator on top of an 8-core 24GB desktop machine running 64-bit Ubuntu 14.04.2 LTS.
We have evaluated ERVA along two dimensions: (1) effectiveness in verifying races and (2) efficiency, i.e., the time required to process an app.
**App dataset.** We ran ERVA on 16 real-world apps (column 1 of Table 1). These apps were chosen according to several criteria: (a) spanning various categories, from note-taking to flashcards to news and utilities; (b) reasonable popularity — column 2 shows the number of downloads, in thousands, according to Google Play; all but two apps have at least 10,000 downloads, while five apps have in excess of 1 million downloads; and (c) nontrivial size — column 3 shows their bytecode size, in KB.
### 5.1 Effectiveness
An effective race verification and reproduction tool should support developers in triaging reported races and allow them to focus on true races, in particular on harmful races — this helps bug finding and fixing. We quantified ERVA’s effectiveness on the 16 aforementioned apps.
We present the results of both EventRacer and ERVA in Table 1. The first set of grouped columns (columns 4–6) summarizes EventRacer’s output: the number of race reports and its breakdown into high and normal priority.\(^4\) For example, for the CoolReader app, EventRacer reports 35 potential races; of these, 15 were high priority and 20 were normal priority. Presumably, the developer would proceed by trying to confirm the 15 high-priority races and then move on to the remaining 20 normal-priority races, which is likely to be quite time-consuming. The last two rows show totals and percentages across all apps: out of 260 race reports, 74 (28.5%) are high priority while 186 (71.5%) are normal priority.

\(^4\)EventRacer classifies a report as high priority if the race is in app code and as medium priority if it is in the Android framework but invoked from the app.
The remaining columns (7–10) summarize ERVA’s output: the number of false positives, true positives, and, among the latter, how many of the races were benign or harmful. For example, in the CoolReader app, ERVA found that of the 35 reports produced by EventRacer, 20 were false positives and 15 were true positives; however, none of the true positives were harmful races. The last two rows, which show totals and percentages across all apps, reveal that out of 260 race reports, 74 (28.5%) were false positives and 186 (71.5%) were true positives; the true positives were split into 68.5% benign and 3% harmful races. Note that harmful races make up only 3% of the total number of race reports, which underscores the importance of race verification. We now discuss harmful races in detail.
**Harmful races.** Since ERVA offers deterministic replay, the schedules that expose the harmful races can be replayed, which helps developers find and fix the root cause of the race. We used this facility to manually confirm that the 8 races reported by EventRacer were indeed harmful. Harmful races manifest in various ways. For example, some harmful races crash the app. In the TomDroid example discussed in Section 4.4, if the SyncThread and BACK key events are flipped, the app will crash due to a null pointer exception. Even if the app does not crash, the consequences can still be deleterious. For example, AnyMemo has a harmful race that leads to an exception and a different GUI state, and is caught by ERVA’s state differencing. An excerpt\(^5\) of the relevant code is shown next.
```java
try {
    // get data from database
    adapter.insert(db.getData());
} catch (NullPointerException e) {
    Log.e("Exception Maybe caused by race condition. Ignored.");
} catch (Exception e) {
    Log.e("Exception Maybe caused by race condition. Ignored.");
}
```
If the event race occurs, the adapter object may be initialized improperly and its dereference will cause a NullPointerException. Interestingly, the developers are aware of the race, but they simply use a try ... catch to handle the exception and thereby mask the effect of the bug. ERVA detects this bug via state differencing and reports that the race will cause a difference in the state of the View object.

Hence ERVA is effective at helping developers verify their apps, as well as find and fix races.
### 5.2 Efficiency
Since ERVA consists of detection and verification phases, we measured the time for each phase. We present the results in Table 2, individually for each app, with the average across all apps in the last row. Recall that in the detection phase we run each app on an instrumented platform; we call this “online time” (column 2) and it takes on average 34 seconds per app. Following the online stage, EventRacer performs an offline analysis (column 3), which takes on average 54 seconds per app.

The time for the verification phase is presented in column 4: on average 1,111 seconds. This is due to ERVA having to perform multiple executions to flip events and compare state; we believe this time can be reduced substantially by using checkpointing to replay only program regions rather than whole executions [29], an idea we leave to future work. Finally, in the last column we present the total time (the sum of the detection and verification phases) for each app: it ranges from 245 to 3,297 seconds (1,198 seconds on average), which we believe is acceptable.
### 6. RELATED WORK
**Race Detection.** Race detection has been widely studied.
Prior efforts have used either static [28, 12] or dynamic [13] analysis to detect races. However, these efforts have mainly focused on detecting multi-threaded data races in applications running on desktop or server platforms. In Android, event-driven races are 4x-7x more numerous than data races [18, 21]. Moreover, techniques geared at desktop/server programs can be ineffective for detecting event-based races. For example, traditional dynamic race detectors assume that instructions executed on the same thread are in program order. However, this is not true for mobile or web applications: these apps adopt an asynchronous programming model, and events handled by one thread arrive in non-deterministic order. Recent work has looked at detecting event-driven races. For example, EventRacer [6, 24] detects event-driven races in web applications, while EventRacer Android [7], CAFA [18] and DroidRacer [21] focus on Android apps. As mentioned in Section 1, these tools suffer from high false positive rates, cannot distinguish between benign and harmful races, and cannot reproduce races; these drawbacks are the main impetus for our work.
Race Classification. Race detectors that support race classification and prioritization are more likely to be adopted by developers, because developers can decide how to prioritize investigating race reports. Narayanasamy et al. [26] use instruction-level record-and-replay to replay alternate schedules, then compare the register/memory state to classify the races. Kasikci et al. [5] apply symbolic execution to classify the consequences of races by comparing their symbolic output result. However, both works focus on multi-threaded desktop/server apps; this approach is not suitable for mobile applications because of their event-driven nature. In contrast, ERVA captures and flips events according to the event dependency graph, rather than altering thread scheduling.
Model Checking. Model checking can help systematically explore all the nondeterministic schedules to find concurrency bugs. R4 [20] aims to find event-driven races in web applications; it uses dynamic partial order reduction (DPOR) [14] and bounded conflict-reversal to limit the total number of schedules to explore. Similarly, AsyncDroid [23] uses delay-bounded prioritized systematic exploration of the recorded schedule to find concurrency errors in Android apps. Unlike these model checking techniques which target finding new buggy schedules, ERVA checks only the potential racy events reported by the race detector and aims to verify whether they are false positives or harmless races. Thus, ERVA cannot detect bugs in unexplored schedules. R4 can check harmless races due to ad-hoc synchronization, but directly applying it to Android seems problematic: Android provides a number of system callbacks that have implicit happens-before relations, and ignoring these callbacks could cause false positives as the example of false positive type-1 shows. ERVA can check this type of false positives by flipping the events to see whether the system enters a deadlock condition. ERVA and model checkers could be combined. For example, the EDG from ERVA can be used as an auxiliary model for R4 and AsyncDroid in their exploration algorithm to check whether the new schedules are feasible or not. Furthermore, the EVS can be used to check the harmfulness of newly explored schedules.
Record and Replay. Record-and-replay has been widely studied and implemented on various platforms. On desktop/server platforms, replay approaches can be categorized into 3 groups: hardware modification [30, 22], virtual machine instrumentation [11, 27], and software-based [16, 31, 25, 10]. However, none of these approaches can be applied to mobile apps, because that would entail either changing the underlying mobile hardware (which is unrealistic) or the VM (which entails high overhead); and software-based approaches do not capture sufficient or suitable information for replaying Android apps due to their asynchronous nature. On the smartphone platform, tools such as Reran [15] and Mosaic [17] support replaying GUI events, but not schedules. Reran is device-dependent while Mosaic, like ERVA, is device-independent. Our own prior work, VALERA [19], supports schedule replay, and we use that support in ERVA. However, VALERA neither uses EDGs, nor can it flip events.
### 7. CONCLUSIONS
We have presented ERVA, an approach and tool for automatically verifying and reproducing event-based races in Android apps. ERVA addresses the imprecisions in current race detectors for Android by precisely modeling events and their dependencies, which allows it to categorize race reports and only point out those reports that are definite, harmful races. Experiments on 16 Android apps show that most races reported by race detectors are false positives or benign races, and that ERVA is an effective and efficient approach for automatically triaging and reproducing races.
### Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. CNS-1061466. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
### 8. REFERENCES
30581, null], [30581, 32293, null], [32293, 34865, null], [34865, 38052, null], [38052, 41235, null], [41235, 41773, null], [41773, 43795, null], [43795, 46811, null], [46811, 49929, null], [49929, 51530, null], [51530, 54239, null], [54239, 56892, null], [56892, 59529, null], [59529, 62215, null], [62215, 65562, null], [65562, 68934, null], [68934, 71970, null], [71970, 74840, null]], "google_gemma-3-12b-it_is_public_document": [[0, 516, true], [516, 2910, null], [2910, 6431, null], [6431, 9464, null], [9464, 11871, null], [11871, 14360, null], [14360, 16366, null], [16366, 19224, null], [19224, 21878, null], [21878, 24899, null], [24899, 27674, null], [27674, 30581, null], [30581, 32293, null], [32293, 34865, null], [34865, 38052, null], [38052, 41235, null], [41235, 41773, null], [41773, 43795, null], [43795, 46811, null], [46811, 49929, null], [49929, 51530, null], [51530, 54239, null], [54239, 56892, null], [56892, 59529, null], [59529, 62215, null], [62215, 65562, null], [65562, 68934, null], [68934, 71970, null], [71970, 74840, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 74840, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 74840, null]], "pdf_page_numbers": [[0, 516, 1], [516, 2910, 
2], [2910, 6431, 3], [6431, 9464, 4], [9464, 11871, 5], [11871, 14360, 6], [14360, 16366, 7], [16366, 19224, 8], [19224, 21878, 9], [21878, 24899, 10], [24899, 27674, 11], [27674, 30581, 12], [30581, 32293, 13], [32293, 34865, 14], [34865, 38052, 15], [38052, 41235, 16], [41235, 41773, 17], [41773, 43795, 18], [43795, 46811, 19], [46811, 49929, 20], [49929, 51530, 21], [51530, 54239, 22], [54239, 56892, 23], [56892, 59529, 24], [59529, 62215, 25], [62215, 65562, 26], [65562, 68934, 27], [68934, 71970, 28], [71970, 74840, 29]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 74840, 0.01144]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
12efeadd89be601f9428d3f85735516b25db757c
|
Faceted search with a large amount of properties
Exploring opportunities for faceted search for an e-commerce application with a large amount of dynamic properties
*Master of Intelligent Systems Design thesis within Findability and Human Computer Interaction*
Per Fredelius
Department of Applied IT
CHALMERS UNIVERSITY OF TECHNOLOGY
Gothenburg, Sweden 2013
Report No. 2013:014
ISSN: 1651-4769
## Contents

1 Abstract
1.1 Keywords
2 Introduction
2.1 Prototypes
2.2 Questions
2.3 Tasks faced
2.4 Outline
3 Background
4 Methods and Tools
4.1 Usability - Affordance
4.2 Information retrieval - Precision and Recall
4.3 Faceted search: categories, tags, properties and values
4.4 Introducing a few usability heuristics
4.5 Glossary
4.6 Literature study
4.7 Tools
5 Suggested concepts
5.1 Visual mapping of articles to axes
5.2 Incorporating multiple axes navigation into result list
5.3 Preference collection
5.4 Creating custom filters by dragging from data elements
5.5 Dynamic filter lists
5.6 Expand with keywords
5.7 Percolation on article creation
5.8 Sunflower navigation
1 Abstract
Qalixa.com is an online e-commerce meta search engine that consolidates articles from a growing number of retailers across most lines of retail business. The articles in store carry a variety of metadata, such as properties, tags, category structures and free text.
In this project, the author explores how this metadata, together with the facilities of a modern enterprise search solution, can be exploited to suggest a user interface capable of handling this complex search space in an empowering manner, while remaining simple enough for the end user to maintain an overview and face as gentle a learning curve as possible.
A few concepts are prototyped and a variety of options for future work are analyzed. Characteristic issues with applying faceted search to this particular case are identified and analyzed in some detail. The concepts tested to some extent are selectable facet filters, grouping of properties with clustering, fuzzy logic in facet search, and implicit ordering of documents given filter order.
1.1 Keywords
Faceted search, Enterprise Search, Dynamic properties, Human-Computer Interaction and Information Retrieval
2 Introduction
Qalixa is a small start-up company that drives the development of a search engine hosted at qalixa.com. With this engine, Qalixa aims to deliver a free-to-post advertisement service for both the business-to-business and the business-to-consumer segment.
The purpose of this thesis is to bring further insight into the particular case that Qalixa represents within the scope of faceted search and user interface design for faceted search, and, where possible, to present opportunities for improvement for Qalixa and for users of such technology in general.
This is realized by analyzing the most prominent issues with the current solution, proposing a set of concepts to alleviate these problems, implementing prototypes for these suggestions, and finally evaluating these solutions to bring further insight into the challenges of the case.
2.1 Prototypes
The first proposed solution is that of selectable facet filters. It means presenting the user with a set of properties recurring in the currently retrieved result set and allowing the user to select from these properties, thereby turning a property into a filter. Filters are presented in a list separate from the properties, so the user can manage preferences of interest apart from the larger set of properties that are not of immediate interest. A filter can be expanded to present a set of applicable filter values, allowing the user to select a span.
Secondly, to deal with cases where the number of applicable properties is large, methods for arranging such properties in a logical way are needed. To supply structure, properties were arranged in groups using a clustering algorithm.
In order to avoid false negatives in a high-entropy, heterogeneous database, it is desirable to have filters that work not in an inclusive/exclusive manner but in a promoting/demoting one. This method can be compared to the idea of fuzzy logic and is in this thesis referred to as fuzzy facets. In that way, articles that happen to lack definitions for particular properties, but would otherwise be of the desired kind, can still be found in the list, albeit further down. A simple method for achieving this behavior is suggested in the prototype and in this thesis.
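The promoting/demoting behavior can be sketched as follows. The article representation (maps of property name to value) and the scoring function are illustrative assumptions, not Qalixa's actual data model: a filter match boosts an article's score, while an article lacking the property is merely ranked lower instead of being excluded.

```java
import java.util.*;

// Sketch of "fuzzy facets": filters promote matching articles rather than
// excluding non-matching ones. Types and names are illustrative.
public class FuzzyFacets {
    // Rank articles so that those matching more filters come first;
    // articles lacking a filtered property are kept, just ranked lower.
    public static List<Map<String, String>> rank(
            List<Map<String, String>> articles,
            Map<String, String> filters) {
        List<Map<String, String>> sorted = new ArrayList<>(articles);
        sorted.sort(Comparator.comparingInt(
                (Map<String, String> a) -> score(a, filters)).reversed());
        return sorted;
    }

    static int score(Map<String, String> article, Map<String, String> filters) {
        int s = 0;
        for (Map.Entry<String, String> f : filters.entrySet()) {
            // Promote on a match; no penalty for a missing property,
            // so non-matching articles are demoted only relatively.
            if (f.getValue().equals(article.get(f.getKey()))) s++;
        }
        return s;
    }
}
```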
Another feature is the implicit ordering of documents given filter order. With a moderate number of interesting articles that the user wants to compare in detail, it should be desirable to rearrange the order of articles based on the values of their properties. This thesis proposes a means of arranging documents based on the order of filters in the filter list.
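One plausible realization of this ordering, assuming articles are maps of property name to value, is a chained comparator built from the user's filter order (all names below are illustrative):

```java
import java.util.*;

// Sketch: order articles by comparing property values in the order the
// user arranged the filters. The data model is an assumption.
public class FilterOrderSort {
    public static void sortByFilters(List<Map<String, String>> articles,
                                     List<String> filterOrder) {
        Comparator<Map<String, String>> cmp = (x, y) -> 0; // neutral base
        for (String prop : filterOrder) {
            // Each filter adds a tie-breaking comparison on its property;
            // missing values compare as the empty string and sort first.
            cmp = cmp.thenComparing(
                    (Map<String, String> a) -> a.getOrDefault(prop, ""));
        }
        articles.sort(cmp);
    }
}
```

Note that values are compared as strings here; numeric properties would need typed comparison in a real implementation.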
2.2 Questions
- How can faceted search be exploited for cases where dynamic properties are prevailing?
- How can Solr or similar tools be used in a case such as Qalixa's?
2.3 Tasks faced
- Production of several GUI concepts for enterprise search and for the e-commerce domain
- Evaluation of search frameworks
- Configuration of Solr for enterprise search
- Development of a plug-in for optimization of MySQL-to-Solr data migration
- Implementation of an e-commerce search GUI prototype with focus on faceted search
- Analysis of problems with the prototype and conclusions drawn
- Implementation of a clustering algorithm
- Formalization of a theoretical model for user-centric search in the information retrieval domain
2.4 Outline
This thesis describes the project by first presenting the case being studied in chapter 3, after which relevant methodology and tools for the domain are described in chapter 4. Then, in chapter 5, concepts are proposed, taking inspiration from the challenges and opportunities observed. Some of these concepts are turned into a working prototype, described in chapter 6. The prototype is then evaluated and problematized in chapter 7. Lastly, the problems are discussed further and some ways of tackling the issues are suggested in chapter 8.
3 Background
The project consisted of literature studies, concept creation, configuration and implementation of a prototype, and finally an analysis of the implementation and prospective future work, from which a concrete list of issues in the domain could be produced. In the literature studies, the author tried to find potential ways to tackle the perceived technical problems of the case in question, search at Qalixa.com. Theoretical studies were made partly as a prelude to the implementation task and partly after it, to make use of the lessons learned.
From the outset it was explicitly of interest to test new search framework technologies. The scope of this project starts out from the premise of applying one such technology to the existing database of Qalixa. Another interest was to try out interesting features of this framework to evaluate the opportunities it can bring.
Furthermore, as an academic project, it was of some interest to find research potential, and, for the author's sake, to try out interesting technology for the sake of learning.
While user studies could have brought additional insight and lessons from the prototype, they were outside the scope of the project due to time and resource constraints. It was also deemed that such a method would be premature, given problems with the user experience stemming from the domain problems identified.
Additionally, little data is available on current user behavior; facilities for analyzing it are not yet in place, as the Qalixa venture is still a young one.
In future work, user studies on a prototype using coherent input data can be used to adjust characteristics of the user interface and to give an idea of the understandability of the proposed features. Given well-designed user tests, it could be concluded whether exotic features actually shorten task completion times and how likely such features are to be discovered and understood spontaneously.
**Case study: Qalixa**
Looking at the Qalixa use case, a few characteristic opportunities were found: technical opportunities that are perceived to need a lot of thought in order to leverage the value of the web service. Applying faceted search to Qalixa presents a certain challenge; the number of applicable facet properties can vary greatly between queries, as analysis of the prototype would show. The total number of properties spanning all articles is unbounded, as each article can define and carry an arbitrary number of them. The relevance and usefulness of any particular property is unknown from the outset.

**Challenge: Empower the user in a complex search space; meta data interactivity**
By its nature, information retrieval involves navigating a large search space. Most of the time, the user is left without a map other than the intrinsic experience derived from previous interactions with retrieval systems and his/her mental model of various searchable concepts. There is little aid for perceivable affordance from the outset. In many enterprise search solutions, or verticals, faceted search is a way to provide a map of the search space. The specific qualities of such a map in the Qalixa case need to be exploited.
For text input alone the visual complexity is relatively small, but for faceted search among an arbitrary number of properties the GUI can become crowded. There need to be ways of lifting out the most relevant properties given the information available.
One particular interactive element from the current Qalixa web site can be examined with this challenge in mind. When browsing by category, a property view can be called up. The property view allows the user to specify preferences and to filter unwanted items out of the set. Visually, the user is presented with a table with a row for each property and cells for each value. Such a presentation is common for this use case. However, for some categories the table becomes impractical as it grows to hundreds of rows (see fig 1).
**Challenge: Performance and latency**
In order to convey affordance, a tight feedback loop is required. Low latency is thus a high priority in any information retrieval application, and Qalixa has few special characteristics in this regard. The current solution is deemed inadequate here. While a thorough performance-profiling investigation of the current solution could likely help alleviate the situation, it was placed outside the scope of this thesis in favor of investigating alternatives; it is deemed that a technology switch is required to solve these problems.
**Challenge: Information structure and coherence; meta data coherence**
The problem of structuring data is largely ubiquitous in information retrieval but takes a certain shape when addressing faceted search. Certain problems, like finding synonyms and homonyms, are more pronounced. Merging properties can be seen as a superset of the problem of merging tags. A related problem is automated categorization, which can be applied to documents given statistical models where content is analyzed to find the likelihood of an object belonging to a certain category. For this case, there is the additional problem of categorizing the metadata itself, such as the properties.
While this project will only briefly test out a possible solution to this problem, some effort has been placed into looking for potential future work and recommendations.
4 Methods and Tools
Some inspirations that went into the concepts produced are described below. Concepts such as affordance, precision and recall are important. Additionally, some heuristics were used as inspiration as well as for evaluating the prototype. The heuristics draw their motivation mainly from the idea that a good user interface should tax the attention capacities of the user as little as possible for any given task, and that any task should be completed quickly and in few, small steps.
4.1 Usability - Affordance
The concept of affordances was introduced by James J. Gibson and is commonly interpreted as *action possibilities in an environment given an actor's capabilities*. Affordances can be divided into a few subcategories: perceived affordances, which are known to the actor; hidden affordances, which are not perceived; and false affordances, which are perceived but not actual. (1)
4.2 Information retrieval - Precision and Recall
Precision and recall are central concepts for evaluating information retrieval systems. Precision is the fraction of returned results that are relevant to the query. Recall is the fraction of all relevant items that are contained in the result set. These terms imply a common method for evaluating IR systems: by measuring precision and recall, systems can be evaluated against each other. (2) However, for testing purposes, this method requires a pre-made gold standard, i.e. a set of items and queries with a relevancy mapping.
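Given such a gold standard, the two measures for a single query can be computed directly from the set of returned document ids and the set of relevant ids (a minimal sketch; the id-set representation is an assumption):

```java
// Computing precision and recall for one query: "returned" is the result
// set produced by the system, "relevant" comes from the gold standard.
public class PrecisionRecall {
    public static double precision(java.util.Set<String> returned,
                                   java.util.Set<String> relevant) {
        if (returned.isEmpty()) return 0.0;
        return (double) intersect(returned, relevant) / returned.size();
    }

    public static double recall(java.util.Set<String> returned,
                                java.util.Set<String> relevant) {
        if (relevant.isEmpty()) return 0.0;
        return (double) intersect(returned, relevant) / relevant.size();
    }

    // Number of documents that are both returned and relevant.
    private static int intersect(java.util.Set<String> a, java.util.Set<String> b) {
        int n = 0;
        for (String s : a) if (b.contains(s)) n++;
        return n;
    }
}
```

For example, returning {d1, d2, d3, d4} when {d1, d2, d5} are relevant gives a precision of 0.5 and a recall of 2/3.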
4.3 Faceted search: categories, tags, properties and values
Faceted search is an HCI methodology, related to information retrieval, in which the user navigates the search space by selecting properties and values in order to narrow down the search result. Technically, any field with a value belonging to a document in the search space can be used as a *facet*. Examples of facet fields are "keyword", "tags", "length", "description", "color", etc.
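At its core, computing a facet amounts to tallying the values of a field across the documents in a result set, which a search backend then presents as selectable filters. A minimal sketch (the document representation is an assumption):

```java
import java.util.*;

// Sketch: deriving facet counts from a result set by tallying the values
// of one field across documents, as a faceted-search backend would.
public class FacetCounts {
    public static Map<String, Integer> countField(
            List<Map<String, String>> results, String field) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map<String, String> doc : results) {
            String v = doc.get(field);
            // Documents lacking the field simply contribute no count.
            if (v != null) counts.merge(v, 1, Integer::sum);
        }
        return counts;
    }
}
```

For a result set with colors red, red, blue (and one document without a color), the "color" facet would be {blue: 1, red: 2}.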
### 4.4 Introducing a few usability heuristics
When interacting with an information retrieval or information exploration system, the user is occupied with the task of finding one or more sets of information. It is in the user's interest that this task be made as simple as possible. An interpretation of what simple means in this context follows, but first we point out a few complementary heuristics.
A graphical interface presents a number of elements to the user, an element here constituting a single unit of perceivable information and sometimes one or more perceived or hidden affordances. Each piece of information adds a constant amount of complexity to the overall complexity of the interface. For each actual affordance added, there is also a non-negligible amount of complexity added, although this depends more heavily on the user's perception. We should distinguish between easily measurable structural complexity and the more elusive complexity perceived and felt by an agent.
Non-perceived affordances should not in themselves cause added perceived complexity and mental load. Potentially, perceived and well-understood affordances should not cause much mental load either. Mental load is instead expected to be caused by elements that do not communicate their affordances well enough; in other words, when the actual affordance cannot be unambiguously determined. It is expected that a perceived and well-understood affordance adds a small constant mental load per item, while an affordance that is not well understood could add a considerably higher load, in the worst case causing the user to give up.
Keeping the number of hidden affordances low, while allowing those that remain to be easily explored through interaction, should help the user quickly lower the perceived complexity of the application. An affordance should be easy to explore if similar actions produce similar results and if the variation in perceived possible actions is low.
For the sake of this project, designing a *simple* interface is interpreted as minimizing the above-described complexity over the course of a task completion. A task can be more or less well defined: it could be described as 'finding a nice affordable car', or as finding a car of a particular brand, from a particular year, in a particular color. This distinction, between a more and a less well-defined task, is also the distinction between information retrieval, for well-defined tasks, and exploratory search, for less well-defined ones.
Two relevant heuristics can be defined for making sense of this idea of interface simplicity: the *number of perceivable elements in a given scene* and the *number of perceivable elements over the course of a task execution*. In order to minimize the latter, the former needs to be minimized, as does the number of scenes the user must face before the task is completed.
Simplicity is not the only requirement for usability, however. Understandability is also needed. Understandability can be seen as the ease of perceiving an element's affordance given the information presented by the element and related elements.
Another concern is that the user needs to solve his task quickly. Given the conceptual model so far, two things consume time: technical latency for switching and changing the scene, and the time needed to perceive relevant elements in the scene.
4.4.1 Relation of precision and recall to affordance
When looking at affordance in the realm of exploratory search (ES), some related heuristics can be found. As described above, it should be desirable to encounter as little complexity during a task completion as possible. Another way of interpreting this in the realm of ES is that we should maximize precision and recall while minimizing the complexity of the interaction.
Now, there can be many interpretations of this goal. One naive and often used solution is to present a single element of affordance with a wide space of action possibilities, or in concrete terms, a search phrase input form. Varying the number of tools available in the environment could diminish the number of steps needed to find a sought article, or it could potentially allow a faster-growing recall curve.
4.5 Glossary
| Term | Definition |
| --- | --- |
| Lexicography | study or structure of word relatedness on the basis of semantics and word features |
| Taxonomy | systematic classification of biological organisms; also classification of information entities in general |
| Semantic | the meaning of a word |
| Ontology | a study or structure of the meaning of words |
| Homonym | two words of equivalent spelling but with different meaning |
| Antonym | word with the opposite meaning to another word |
| Polysemy | word that can have multiple related interpretations |
4.6 Literature study
Various articles were studied to find potential inspiration for the Qalixa case. A few were selected for having future potential for the case.
(3) demonstrates methods for, and analyzes a prototype implementation of, using Wikipedia as a source for creating semantic data and deriving homonyms and synonyms for tags (the CREW prototype). This could be used to process untidy data from third-party sources and to aid individual article publishers in clearly defining what they mean. Building taxonomies and categorizing new articles could also potentially benefit from such technology. A cited article (4) goes further into detail on how to obtain semantic relatedness from Wikipedia's network of links.
(5) details various methods for semi-automatic clustering, giving the user the ability to influence how clusters are formed through a proposed user interface. This could be useful for giving retailers and users the ability to control how their articles are categorized. (6) suggests another method that focuses in particular on leaving the labeling task to a supervising user.
Another work designs and analyzes a GUI application, *Stuff I've Seen*, for managing search history in combination with faceted search. The application allows users to conveniently search through articles previously viewed and is evaluated with extensive user tests.
A further study tries to attack the problem of information overload by utilizing clustering techniques and corresponding GUI design. A clustering-based visualization GUI for navigating a large set of items is presented, and the problem of discerning concepts of varying significance is addressed by a *fisheye* concept, where more significant words take up more space in the GUI.
Yet another work presents an optimized variant of Lloyd's algorithm using kd-trees. Lloyd's algorithm is a popular algorithm for clustering (10).
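For reference, plain Lloyd's algorithm (k-means) alternates between assigning points to their nearest centroid and moving each centroid to the mean of its assigned points. A minimal one-dimensional sketch (without the kd-tree optimization):

```java
// Minimal sketch of Lloyd's algorithm (k-means) in one dimension:
// repeat { assign each point to its nearest centroid; move each centroid
// to the mean of its assigned points }.
public class Lloyd1D {
    public static double[] kmeans(double[] points, double[] centroids, int iters) {
        double[] c = centroids.clone();
        for (int it = 0; it < iters; it++) {
            double[] sum = new double[c.length];
            int[] count = new int[c.length];
            for (double p : points) {
                // Assignment step: find the nearest centroid.
                int best = 0;
                for (int j = 1; j < c.length; j++)
                    if (Math.abs(p - c[j]) < Math.abs(p - c[best])) best = j;
                sum[best] += p;
                count[best]++;
            }
            // Update step: move each centroid to the mean of its cluster.
            for (int j = 0; j < c.length; j++)
                if (count[j] > 0) c[j] = sum[j] / count[j];
        }
        return c;
    }
}
```

On points {1, 2, 10, 11} with initial centroids {0, 5}, the algorithm converges to centroids 1.5 and 10.5.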
### 4.7 Tools
While there are numerous open-source frameworks for information retrieval, and while an exhaustive list is hard to find and probably impractical to maintain, a few frameworks stood out as interesting. (11)
A few distinguishing traits can be seen among the alternatives. Firstly, alternatives are either open source or closed source. Closed-source alternatives were dismissed at an early stage, partly because they would narrow down future freedom for the project and for Qalixa if put into use, and partly because of project scoping and the need to be able to evaluate the solution quickly.
Secondly, many of the search systems available are either ports of, or projects building upon, Apache Lucene. Examples of the former are Ferret (12) and Lucy (13); examples of the latter are Solr and ElasticSearch.
Thirdly, frameworks are either built upon an existing database technology or built as database systems of their own. Lucene and its derivatives are examples of the latter, while Sphinx and Riak Search can in part be seen as the former (14). Hibernate Search could, at a stretch, also be grouped with the former.
#### 4.7.1 Lucene and Solr
All in all, the Lucene family was perceived as the group of engines with the strongest community support and, at a glance, the largest feature set of the engines examined. Lucene itself stands out in that it is built to be integrated into a host application (15) rather than being stand-alone and deployable out of the box like Solr (16). Lucene requires you to at least build a custom data-import layer in a JVM language, while Solr lets you design an import layer meeting many use cases in XML. Similarly, handling the input query and turning it into a query understandable by the framework is slightly more involved, or more manual, in the Lucene case: for Lucene, components are combined in code, while Solr lets you combine components in XML, guided by a systematic markup style. Lucene arguably gives you some additional freedom in defining the interface to the search framework, while its derivatives add additional features (17)(18).
Features of Solr include faceted search, geospatial search, an XML/JSON/CSV REST-like API, and XML configurability. Lucene also covers faceting to some extent (19). Web services currently using Solr include Instagram, The Guardian, SourceForge, eHarmony, Netflix, eBay (Germany), digg and reddit (20). Solr supplies means for running clustering on search results and documents using the Carrot2 clustering engine (21). It is meant to be highly customizable, allowing implementations of, for example, clustering algorithms to be plugged in. Solr has been shown to give significant speedups for large-scale services, such as Twitter (22).
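A faceted query is issued against Solr's REST-like API as a plain URL. The sketch below builds such a URL; the parameter names (`q`, `facet`, `facet.field`, `wt`) are standard Solr query parameters, while the host, port and core name ("articles") are placeholders:

```java
// Sketch of building a Solr facet query over its REST-like API.
// The base URL and core name are illustrative placeholders.
public class SolrFacetQuery {
    public static String buildUrl(String base, String query, String facetField) {
        String q = java.net.URLEncoder.encode(
                query, java.nio.charset.StandardCharsets.UTF_8);
        return base + "/select"
             + "?q=" + q            // the user's search phrase
             + "&facet=true"        // enable faceting
             + "&facet.field=" + facetField // field to compute counts for
             + "&wt=json";          // response format
    }
}
```

For instance, `buildUrl("http://localhost:8983/solr/articles", "laptop bags", "color")` produces a request whose JSON response would include per-value counts for the "color" field alongside the matching documents.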
4.7.2 ElasticSearch
ElasticSearch is another engine in the Lucene family and the closest competitor to Solr. While it is younger, and as such has had less time to mature, it seems to have gained a lot of traction. It sports a few features that some deem game-changing. It is built from the ground up with distributed search in mind (23) and has been shown to perform significantly better in some cases where a large index was continuously updated while also serving queries (24). Solr is, however, deemed faster for indices that seldom change. ElasticSearch is made for easy deployment and does not require defining a schema for basic use cases (25). An additional feature is the Percolator, or reverse search, which allows storing queries and then checking which queries are matched by submitting a document (26). Web services currently using ElasticSearch include the Mozilla Foundation, SoundCloud, StumbleUpon, Klout, IGN, Sony Computer Entertainment (27) and StackOverflow (28). Developer-usability-wise, ElasticSearch is customizable through the same JSON REST interface that everything else is passed through; this could be an advantage over the rigid XML file structure of Solr (25).
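The percolation idea itself is simple to illustrate: queries are stored first, and submitting a document returns the stored queries it matches. The toy sketch below mimics the concept only, with stored queries as required field/value pairs; it is not the ElasticSearch Percolator API:

```java
import java.util.*;

// Toy illustration of percolation ("reverse search"): store queries,
// then submit a document to find which stored queries match it.
public class Percolator {
    private final Map<String, Map<String, String>> stored = new LinkedHashMap<>();

    // Register a named query as a set of required field/value pairs.
    public void store(String name, Map<String, String> requiredFields) {
        stored.put(name, requiredFields);
    }

    // Return the names of all stored queries the document satisfies.
    public List<String> percolate(Map<String, String> document) {
        List<String> matches = new ArrayList<>();
        for (Map.Entry<String, Map<String, String>> e : stored.entrySet()) {
            boolean ok = true;
            for (Map.Entry<String, String> req : e.getValue().entrySet()) {
                if (!req.getValue().equals(document.get(req.getKey()))) {
                    ok = false;
                    break;
                }
            }
            if (ok) matches.add(e.getKey());
        }
        return matches;
    }
}
```

In an e-commerce setting this maps naturally to saved searches: percolating each new article on creation tells you which users' stored queries it satisfies.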
4.7.3 Sphinx
Another search framework is Sphinx. Some distinguishing features compared to the Lucene family are tight integration with relational databases, such as MySQL, and a focus on speed, with Sphinx seemingly being faster than Solr for some tasks such as indexing and database queries (29). In fact, one benchmark, albeit potentially outdated, shows Sphinx consistently outperforming Solr in terms of both memory and CPU (30). Sphinx's facilities for faceted search are a bit different, allegedly allowing a greater set of use cases but seemingly requiring a bit more manual work up front (31). Another distinction is the source-code license: Solr is Apache-licensed, while Sphinx is licensed under GPL2, with the maintainers offering commercial licensing as well. Sphinx is used by services such as Craigslist and Groupon and is known to work with more than 25 billion documents and more than 300 million queries per day (32). Sphinx is written in C++, which may bring additional entropy to a Java-centric project.
4.7.4 Solr extensions and supporting applications
Some tools that can be used to extend Solr are worth a mention. SolrJ is a Java library that serves as a Java search interface to Solr, which is otherwise talked to using a REST API; using SolrJ, some of the flexibility of Lucene can be recovered (33). Tika is a parser library designed to work with Solr, although it is mainly aimed at extracting data from documents (34). ZooKeeper is a framework for coordinating distributed processes and is planned to become an integrated part of Solr in version 4 (35). Apache Nutch is a search engine solution that adds a crawler and integrates Tika with Solr (36).
4.7.5 Search framework comparison
For this project, Solr was selected. This is not an obvious choice, however. Three frameworks were considered as candidates: Solr, ElasticSearch and Sphinx. It is appreciated that the choice very much depends on the problem. On the basis of scalability, ElasticSearch seems like the best choice at the moment, due to it being built with distributed storage in mind. However, Solr is currently spearheading an initiative to integrate cloud capabilities more tightly (37). Solr also sports its own clustering engine. Although clustering could be seen as a feature with an uncertain current footing in the domain, it may hold future potential, and Solr is arguably the platform of choice for experimenting with such techniques, given its Carrot module. Performance tests of Sphinx show promise, and the fact that it is used by some large-scale e-commerce vendors could be enough to motivate testing it out.
4.7.6 Search frameworks as NoSql stores
Solr and its cousins are structurally very similar to NoSql data stores (38). The distinction is arguably one of marketing strategy (39) although there may be some distinctions in the feature set as well (40). Likewise, some data stores implement many features of search frameworks (14).
4.7.7 Backbone
Backbone is a JavaScript module and framework for lightweight MVC modeling. It is regarded as a library rather than a framework, given that it imposes little restriction on the software architecture (41). It supplies classes (or an approximation of classes) for such things as Collections, Models and Views, and facilities for controlling event propagation between them.
4.7.8 Pure Functional and Functional Reactive programming for client side web applications
Some time for shallow testing was given to the Elm language (42), a new programming language that borrows ideas from the pure functional world of Haskell and focuses especially on the idea of Functional Reactive Programming (FRP) (43). It compiles to JavaScript and is meant to be used for developing graphical web clients. While a more thorough investigation of the pros and cons of using such a language for a complex web application is left outside the scope of this project, a few remarks can be made. The succinctness of implementations of some recurring web application features when using Elm is demonstrated in the examples on its official web site (44); the Flickr API example should be the one most relevant to information retrieval clients (45). However, small implementation tests showed that debug printouts can be very sparse, potentially making continuous implementation challenging.
An alternative to Elm that was also tried to some extent is Fay (46). Fay takes the approach of implementing a subset of Haskell for compilation to JavaScript. A second important design choice is to make the generated code as readable as possible, potentially making debugging easier. However, Fay does not give the FRP paradigm any precedence, although there is an example of how FRP can be realized using JavaScript FRP libraries in Fay (47).
5 Suggested concepts
A few concepts were produced, some of which were made into mock-ups.
5.1 Visual mapping of articles to axes
Some initial ideas explored putting search result items in a map-like context, in order to visualize the grouping of items along facets in a continuous way. This could be particularly effective for facets of number type (See fig 2). Such a feature can allow the user to attain an overview of the distribution of properties over a dimension and also allow faster navigation by supplying an additional degree of freedom.
Additionally, through interaction, drag and drop or other, the user could be allowed to control both which facets are bound to each dimension, as well as which value should be brought into view if the interval does not fit on screen. The latter could either be controlled by dragging the values along the axis or by dragging from a list of values to the axis (See fig 3). The motivation for such a feature is that it would make it easy for the user to switch which dimensions are traversed when selecting out of the result set, and as such allow the user to navigate a high number of dimensions more quickly.
5.2 Incorporating multiple axes navigation into result list
A variation of axis mapping of facets is to have columns for each result group or facet value in an ordinary result list. Such a solution could arguably be more friendly to an adaptive layout while
Figure 3: Visual mapping 2
Figure 4: Horizontal drag scroll on facets
somewhat losing the aspect of communicating the distribution of items along a dimension (See figure 4). It would still have the benefit of supplying an additional degree of freedom that can be used for navigation.
In order to allow scrolling to adjacent groups, drag scrolling could be used. This can expand the interactive region, as the user does not need to access periphery scroll controls and does not need to know about modifier keys such as shift-scroll. However, drag scrolling has so far been avoided in most desktop-oriented applications, so the probability of acceptance might be limited.
5.3 Preference collection
Also referred to as the filter tray. In order to separate various tasks and as such make the selection routine for large heterogeneous data sets less daunting, selection of properties and selection of their values can be made separate. As suggested by figure 5, properties would reside in a separate element from the so-called filter tray. The filter tray houses those properties selected by the user to perform filtering on the result set. Each filter represents a property and supplies the necessary controls for selecting a scope of values for that property to select on. Such preference collection has the potential benefit of both reducing the number of elements shown and bringing the properties useful to the user closer at hand.
Expanding on the concept, there could be ways for the user to build customized collections of preferences. Perhaps as the user returns to a previously visited category, the properties and values used last time around could already be selected and appear in the filter tray.
5.4 Creating custom filters by dragging from data elements
As a proposed solution towards easing the process of narrowing down the result set, the user should be able to easily find and select properties and values. Properties may not always be findable in one place; they may live in various spaces of the GUI. The information displayed in the GUI comprises articles, properties and property values. All those elements could potentially be used for rephrasing a query. Properties and values could be made into filters, and articles could be seen as a proxy for other keywords, properties or values, either from looking at their own properties, or by looking at their percolated values (See Percolation on article creation).
This gives the idea of allowing the creation of filters from such elements in order to ease exploratory search. For example, values present in the columns of the result list could be made draggable and turned into a facet filter when dropped in the list of filters (See figure 5).
5.5 Dynamic filter lists
Filters, as mentioned above, can be managed in various ways to further lift the capabilities of the user. Rearranging such lists could be made meaningful. One proposition is to make the rearrangement of filter lists influence the sorting of results. The topmost filter, if it represents an interval, should decide the primary sorting feature. Consecutive filters decide the sorting of items within each group of items that share the same value of the facet represented by the previous filter. For example, if there are two filters, ‘price’ and ‘length’, results will be sorted by price first and length second.
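In Java terms, this filter-order sorting rule can be sketched as a chained comparator, where each filter contributes one sort key in order. The Article record and its fields below are illustrative, not part of the prototype:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Sketch of the filter-order sorting rule: the topmost filter supplies the
 *  primary sort key and each following filter breaks ties. The Article
 *  record and its fields are hypothetical. */
public class FilterOrderSort {
    public record Article(String name, int price, int length) {}

    /** Sorts as if the filter list were ['price', 'length']. */
    public static List<Article> sortByFilters(List<Article> results) {
        List<Article> sorted = new ArrayList<>(results);
        sorted.sort(Comparator.comparingInt(Article::price)
                              .thenComparingInt(Article::length));
        return sorted;
    }
}
```

Reordering the filter list would then simply correspond to rebuilding the comparator chain in the new order.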
A second potential of dynamic filter lists could be that of adapting the selectable range of values based on what ranges are currently available in the result set. If the price range is changed in the price filter, some part of the length range showing in the length filter may no longer be valid. As such, its selectable range should be adjusted. However, this may prove problematic, as it can sometimes be desirable to show ranges that are no longer applicable. A solution to that could be to simply ‘grey out’ values that correspond to empty result sets.
5.6 Expand with keywords
Another venture that could be explored is that of adding keywords to a query, as a lightweight way of exploratory search. There should be a fair amount of semantic reasoning behind the suggested words however. Direct synonyms should be searched implicitly and not be part of the suggestions. So suggested words should be related but not equivalent. Additionally, suggestions could be used in order to select among homonyms (see fig 7).
In order to meet the need for semantic relatedness data, the technology suggested above for using Wikipedia to that end could be applied. While many other applications of the Wikipedia technology are likely possible, they are left out of the scope of this thesis.
5.7 Percolation on article creation
Percolation, an ElasticSearch-specific feature, could potentially be used in a scenario where a user is about to create an article for a product, informing the user how to expose the article to the most relevant search expressions. Percolation works by indexing selected queries so that
Figure 6: Expand search using related keywords
Figure 7: Expand search using related keywords
documents can be tested against them, seeing whether such a query would turn that document into a valid result. This notion could be used to incentivise the user into tagging and otherwise writing the article for maximum precision, although care should be taken to avoid incentivising tag abuse instead. By only showing the few queries most relevant to the article, this should be avoided. (See fig 8)
Further, percolation could be used for analyzing the impact of entire product feeds coming from retailers. If used as such, it could help incentivise retailers into delivering cleaner feeds and also conveying value brought by the search engine. Also, manual tagging and categorization of multiple products at once could be aided. (48)
Other uses could be thought of as well, but require further feasibility studies. Percolation could potentially be used for aiding in maintaining and creating tags from queries for example.
5.8 Sunflower navigation
Sunflower navigation expands on the expand with keywords concept and tries to map it onto dimensions; creating a graphical vector representation of each significant keyword in the query phrase and overlaying related, not yet selected, keywords, allowing the user to turn the suggestions into actual search keywords. (See figure 9)
For such a concept to work, there needs to be an adjacency heuristic for each pair of tags. Arrows, or vectors, can then be laid out by force-based means to visualize concept closeness. This way, exploring related words can be made quicker, as the user is encouraged to first select on subgroups rather than individual words.
Figure 9: Search map/Sunflower
Figure 10: User interface components
6 Prototype
6.1 User interface
The prototype selects from various previously mentioned concepts, focusing on 5.3 and 5.5. Concept 5.4 was tried to some extent but discarded, as the overall design did not supply enough useful data elements for it to be useful. An additional set of features not outlined above, such as 6.5.5 and 6.5.6, were implemented and are described below.
The client consists of a search form, a side pane populated with groups of attributes applicable to the search phrase currently entered, a list of results and a “filter list”, the latter being initially hidden.
The attribute list orders attributes in small groups of some conceptual similarity determined by the clustering algorithm. If the user selects an attribute from the list, it will be added to the filter list. The internal representation of the filter list is traversed when changed to produce a new query once a property value has been selected. Filter components can be rearranged by drag and drop. (See fig. 10)
6.2 Client architecture

An ideal representation of the application model can be seen in fig 11. The search form is the initial interactive component for the user. The search form submits an input phrase to the search framework interface, which in turn communicates with the search server. The interface notifies both the result list and the facet list when a response is retrieved. The user can then interact with the facet list to add filters to the filter list. The filter list notifies the search framework when a meaningful change has happened, and again the facet and result lists are updated. Optionally, filter content is updated as well.
The actual implementation (see fig 12) consists of an ApplicationModel that houses all other models except the search interface, SolrInterface. It also handles most of the event bindings and propagations. The application model contains the collection of Attributes, the collection of Filters, the collection of Results, the collection of Groupings and the Query model. The Attributes collection carries the most model logic, collecting all Attribute models. Each Attribute model in turn owns a collection of Values that holds Value models. The Grouping and Filter collections, on the other hand, only hold references in the form of String ids. Any time an attribute that forms a filter needs to modify its model, it has to be found inside the Attributes collection. This was considered a necessary detail at the time of writing, due to the workings of Backbone, but might be done more elegantly using actual references instead of ids.
There are several view classes used to reflect the state of the models in the GUI. The primary ones are the FacetList, the FilterList and the ResultList. In addition there are FilterComponents, instantiated as subviews of the FilterList, and the SearchForm, representing the form element used for search phrase input.
In addition to the above, there are toolbars, both as models and as views. These are added to allow removing and locking filters, where locking a filter prevents its selectable values from changing as the result set changes. (49)
6.3 Algorithmic needs
Looking at the challenges that need to be met and the solutions proposed, some algorithmic needs can be discerned. In order to make the data workable and coherent, documents and attributes need to be matched with near equivalents. Synonyms, homonyms and polysemic words need to be recognized somehow. Articles need to be fitted into a suitable category. Articles may come from divergent sources, referencing diverse category structures and strictly incompatible taxonomies. Sometimes articles may supply little or no pre-existing category assignment, and perhaps little data to go by.
It could be desirable to automatically extend an existing category structure in some cases, for example when a category is considered to overflow, that is, when there are too many items in a category
Figure 13: Property adjacencies from articles
Figure 14: Adjacency matrix
Figure 15: Clustering using centroids showing the three first dimensions/properties
for it to be easily navigated. It could perhaps also be desirable to show and hide various branches of a complex category tree depending, for example, on the size of the branches given a search phrase.
Given the large number of properties returned by search results, to be selected from for further refinement, there need to be ways of giving structure in order to lower the complexity faced by the user. Applying clustering techniques could meet this problem to some extent.
6.3.1 Grouping Algorithm
As a partial response for the need of adding some additional structure, a grouping algorithm was implemented. It is added to the Solr execution as a plugin component to a custom search handler in the Solr configuration. The major part of the algorithm runs as a response to a query, processing the set of articles returned by Solr and producing the relevant output that is attached to the result object returned to the client.
The algorithm begins by generating an adjacency matrix of all the properties of the articles in the result set, incrementing the adjacency every time two properties occur together (Fig 13). The adjacency matrix is then interpreted as a Euclidean space with as many dimensions as there are properties (Fig 14) and passed into an implementation of Lloyd's k-means clustering algorithm, from which a set of groups/clusters is produced (Fig 15). Below is a simplified version of the clustering algorithm in pseudo code:
```pseudo
lloyd = function () {
    centroids := randomCentroids()
    until (timeOut or goodEnough):
        for each point:
            assignToCluster(point)
        centroids := (for each cluster:
            recompute centroid of cluster)
}

assignToCluster = function (point) {
    closestCluster := find cluster closest to point
    assign point to closestCluster
}
```
The algorithm is done when the time has run out or when the centroids move sufficiently little per iteration.
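For concreteness, a minimal Java sketch of Lloyd's algorithm is given below. It is not the plugin's actual code: centroids are initialised from the first k points for determinism (random sampling is the usual choice), and iteration stops when no point changes cluster or the iteration budget runs out, mirroring the time-out/good-enough condition above.

```java
/** A minimal sketch of Lloyd's k-means clustering; illustrative, not the
 *  actual plugin code. Each row of 'points' is one property's adjacency
 *  vector; the result assigns a cluster index to every point. */
public class KMeans {
    public static int[] cluster(double[][] points, int k, int maxIterations) {
        // Deterministic initialisation: first k points (random is more usual).
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++) centroids[c] = points[c].clone();
        int[] assignment = new int[points.length];
        for (int iter = 0; iter < maxIterations; iter++) {
            // Assignment step: move each point to its closest centroid.
            boolean changed = false;
            for (int p = 0; p < points.length; p++) {
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (dist(points[p], centroids[c]) < dist(points[p], centroids[best]))
                        best = c;
                if (assignment[p] != best) { assignment[p] = best; changed = true; }
            }
            if (!changed) break; // "good enough": assignments are stable
            // Update step: recompute each centroid as the mean of its cluster.
            double[][] sums = new double[k][points[0].length];
            int[] counts = new int[k];
            for (int p = 0; p < points.length; p++) {
                counts[assignment[p]]++;
                for (int d = 0; d < points[p].length; d++)
                    sums[assignment[p]][d] += points[p][d];
            }
            for (int c = 0; c < k; c++)
                if (counts[c] > 0)
                    for (int d = 0; d < sums[c].length; d++)
                        centroids[c][d] = sums[c][d] / counts[c];
        }
        return assignment;
    }

    private static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }
}
```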
6.4 Data importer
A data importer was needed on the server side to correctly and efficiently transfer data from the original MySql database to Solr. While a standardized data import plugin exists for Solr, it was deemed a bad fit for the needs of this project. Import schemes made for the standard importer ran slowly and took away the possibility of multi-valued properties. The custom importer showed a very significant speedup. While there may be means of reaching the same speedup without custom code, none could be found.
6.4.1 Modules
The import handler consists of a worker and a front-end class. The front-end class, FlatTableImportHandler, is called from Solr when there is a URL request to the corresponding request handler specified in the Solr configuration. A command is passed with the URL. If the command is “full-import”, the worker is started in a thread of its own. Any other command, or the lack of a command, will return a status report for the worker.
The worker, ImportWorker, executes a MySql query specified in the Solr configuration and reads the response line by line. As long as rows share the same unique id, the properties specified by those rows are aggregated into a single document. The document is submitted to Solr once the id changes. This is done so that the document in the Solr database will not risk being overwritten.
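The aggregation step can be sketched as follows. This is an illustrative simplification, using in-memory String[] rows instead of a JDBC result set and plain maps instead of Solr documents; it is not the actual ImportWorker code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the row-aggregation idea: rows sharing an id are merged into one
 *  multi-valued document, which is completed when the id changes. Names are
 *  illustrative, not the actual ImportWorker code. */
public class RowAggregator {
    /** Each input row is {id, propertyName, propertyValue}; rows are assumed
     *  ordered by id, as produced by an ORDER BY in the SQL query. */
    public static List<Map<String, List<String>>> aggregate(List<String[]> rows) {
        List<Map<String, List<String>>> documents = new ArrayList<>();
        String currentId = null;
        Map<String, List<String>> current = null;
        for (String[] row : rows) {
            if (!row[0].equals(currentId)) {
                // Id changed: start a new document (the previous one is complete).
                currentId = row[0];
                current = new LinkedHashMap<>();
                current.put("id", new ArrayList<>(List.of(currentId)));
                documents.add(current);
            }
            // Same id: accumulate the property as a multi-valued field.
            current.computeIfAbsent(row[1], k -> new ArrayList<>()).add(row[2]);
        }
        return documents;
    }
}
```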
6.5 Feature roundup
6.5.1 Filter selection and deletion
The user specifies property preferences of the sought results by choosing properties from a list. Chosen properties turn into filters that also display applicable values for the result set, allowing them to be chosen from. Choosing one or more values causes the document set to be filtered to match the chosen property-value pairs. In non-fuzzy mode, a document must match at least one property-value combination per filter in order not to be excluded.
6.5.2 Switchable filter value updates (Padlocks)
In order to give a dynamic view of what properties remain meaningful as preferences are specified, the padlock button allows switching whether the set of values for a particular property should be dynamically updated as the result set changes. The set of values that remain if the filter is unlocked corresponds to those property values that can be found in the result set.
6.5.3 Sliders for enumerable facets (deselecting intermediate values?)
Some properties are recognized as enumerable, or of number type. It should in most cases be interesting for the user to select a span rather than individual values for such cases, avoiding the extra work of selecting a large number of similar values. Therefore, a slider is used for controlling such value selection.
6.5.4 Switchable sorting and sorting customization by drag and drop
Without adding additional elements to the scene, a fine grained control for the sorting order of results is made possible by looking at the ordering of filters. For an attentive user, it can be seen that the sorting order is made by each column in order. The rationale for this is that the prioritization of the filters, or preferences of the user, can be signified by the visual order. The order of the columns in the result list is determined by the order of filters as well. The filter order can be rearranged by drag and drop.
6.5.5 Dynamic column creation
What properties are to be displayed in the result list is determined in part by a predefined list of default properties and in part by the properties corresponding to the filters in the filter list. This allows a dynamic display of articles where focus is put on those properties that are likely of most interest to the user.
6.5.6 Fuzzy facets
The user has the option of enabling facet fuzziness. This causes the filters to promote matching and demote non-matching articles, rather than exclude non-matching articles entirely. This feature is realized in Solr using query boosting. For example, the following generated query will only retrieve articles that match the preferences exactly:
```
car AND (color:blue AND price:[100 TO 10000] AND model:Toyota)
```
However, adding wildcard queries with a lower prioritization/boost will allow previously unmatched articles to appear towards the end of the result list:
```
car AND (color:blue^1000 OR color:[* TO *]^0.1)
    AND (price:[100 TO 10000]^1000 OR price:[* TO *]^0.1)
    AND (model:Toyota^1000 OR model:[* TO *]^0.1)
```
There is a reservation on this solution: articles that do not define the property will still not be included. Depending on the reliability of the data, this may or may not be desirable. An addition that would include even such articles, with an even lower prioritization, would be as below. This possibility is not demonstrated in the prototype, however:
```
(car AND (color:blue^100000 OR color:[* TO *]^10)
     AND (price:[100 TO 10000]^100000 OR price:[* TO *]^10)
     AND (model:Toyota^100000 OR model:[* TO *]^10))
OR car^10
```
7 Analysis
7.0.7 Intended use case
The intended use of the prototype is to first enter a generic search phrase in the search form and hit ‘Search’. Secondly, zero or more properties are selected from the left according to one’s preferences. Thirdly, the selected properties, now turned into filters in the filter tray above the result list, can be used for narrowing down the results. Selecting multiple values from the same property causes a non-exclusive multi-value selection on that property; in other words, an ‘OR’ selection is made over all selected values of one particular property. If multiple filters are used, an exclusive selection happens, combining the filters with an ‘AND’ operation.
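This OR-within-a-filter, AND-across-filters rule can be sketched as a simple query-string builder. The field names and the exact join format below are illustrative, not the prototype's actual query generation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Sketch of how filter selections combine into a query string: values within
 *  one filter are OR'ed, and filters are AND'ed together. Illustrative only. */
public class FilterQueryBuilder {
    public static String build(String phrase, Map<String, List<String>> filters) {
        StringBuilder query = new StringBuilder(phrase);
        for (Map.Entry<String, List<String>> filter : filters.entrySet()) {
            List<String> terms = new ArrayList<>();
            for (String value : filter.getValue())
                terms.add(filter.getKey() + ":" + value);   // e.g. color:blue
            // Values of one filter are OR'ed; filters are AND'ed together.
            query.append(" AND (").append(String.join(" OR ", terms)).append(")");
        }
        return query.toString();
    }
}
```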
The intention is that both retrieval of specific known articles, using known properties of such articles, and exploration of the data set within a certain domain should be made easier. Direct feedback when alternating both properties and property values should support exploration, while supplying relevant properties should allow the user to quickly zoom in on an intended article. Additionally, adding and subtracting filter selections should allow the user to identify corner cases, highlighting which articles are being opted out.
7.0.8 Observed issues
Some issues can be observed when looking at the prototype. These issues are considerable in scope, but if solved, they should significantly elevate the potential of the approach suggested by the prototype. Some of these challenges are arguably hard to avoid for any attempt at dealing with e-commerce from the perspective chosen by Qalixa and similar ventures, and should unlock considerable value if solved comprehensively. Many of these issues are expected to be especially relevant when having a great number of dynamic properties coming from varying sources.
The most apparent problem in the prototype is the large number of property fields available for the user to select from. Many of the fields are nonsensical when taken out of the context of their parent articles, or seem irrelevant for most uses. Many properties and values are duplicates or are of very similar meaning. Sometimes property names map to values of varying type or dimension. Some values may be of number type, but this is not recognized, and as such the property is not given a slider when used as a filter. Some properties have superfluous characters in their name. Some values are placeholders for non-values; some should be split into multiple values.
Many properties have only a single selectable value. This is arguably part of a larger problem: at the point of retrieving properties, the number of applicable values for each property given the current search space is not known (only the number of occurrences of the property fields throughout the result set is known). Retrieving applicable values for all properties over a large result set is presumably very computationally expensive, though possible, using the Solr REST API. In the current prototype, only the values of the chosen filters are retrieved.
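Per-property value counts over the result set are what Solr's standard faceting parameters return; a hedged example request (host, core and field names are illustrative) might look like:

```
http://localhost:8983/solr/select?q=car&rows=0&facet=true&facet.field=color&facet.field=model&facet.mincount=1
```

Here `rows=0` skips the documents themselves and `facet.mincount=1` drops values with no occurrences, which is one way the cost of the lookup can be kept down.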
More subtle are the issues that arise once interaction starts. One possible issue is that of filtering out false negatives. If a certain attribute is not defined for an item, it cannot really be known whether it should be filtered out or not. The default behavior of Solr is to filter it out, which may be undesirable. Similarly, a lack of synonym understanding among values causes problems with false negatives as well.
**Dynamic filter options and Padlocks** One issue appears as a result of updating selectable filter content dynamically. As the result set is diminished, the set of field-value pairs normally diminishes as well. If this is not reflected in the possible preferences of the filters, there is a risk of allowing the user to select a set of preferences that results in an empty result list. In some cases this risk may even be higher than that of selecting preferences resulting in a non-empty result set.
Always allowing dynamic updates of filters is problematic, since a filter mixes the traits of excluding existing and including additional articles. When first putting a filter into use, the filter acts to exclude items from the result set. When selecting additional values, the result set grows. A similar growth and shrink pattern is reflected in the set of applicable values and properties. Additionally, whilst using fuzzy facets or working with a large set, the issue is a bit more subtle: individual articles, properties and values may be excluded by way of becoming less prominent as the set grows. As such, filters or values will be treated as too insignificant for inclusion, causing them to disappear mysteriously.
The Padlock feature exists partly to address this problem. To help the user avoid these cases, the padlock button allows the user to manually switch whether applicable filter values are dynamically updated. This is far from a perfect solution, as it requires the user to figure out when to apply the dynamic updating him/herself.
**Summarized, the issues in list form:**
- Missing context for understanding property meaning
- Irrelevant properties
- Synonyms not recognized
- Unit or qualitative/quantitative homonyms not recognized
- Basic sanitization and interpretation of names and values are lacking upstream
- Special characters in property names
- Multi values interpreted as single values
- Non-value placeholders interpreted as values
- Insufficient structuring of large amount of properties
- Over-eager filtering, false negatives
- Synonyms
- No defined value for selected property
- Invalid or missing options among filters after query
**7.0.9 Technical issues**
Some issues are purely technical. Some property names cause problems due to Solr not being able to cope with special characters. This could be blamed on insufficient sanitization, but there may be cases where special characters are desired in the displayed name. Additionally, many non-English languages will require support for Unicode. For these purposes Solr has some support, using the `ASCIIFoldingFilterFactory` (50) or `UnicodeCollation` (51). These are not applied in the prototype, however.
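As a sketch of how such folding could be enabled, assuming a standard Solr schema.xml (the field type name is illustrative):

```xml
<!-- Sketch of a schema.xml field type applying ASCII folding during analysis;
     the fieldType name "text_folded" is illustrative. -->
<fieldType name="text_folded" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>
```

Fields using this type would then match accented and unaccented spellings alike, which addresses part of the special-character problem without sanitizing the displayed names.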
Another issue is the need to associate and infer additional information to the property name, such as property type. The variation of property types may for example be used for varying visualization and interaction methods, so that properties with number values may use sliders. Associating additional data to field names may require creating additional entries to the database. Such entries could also associate a display name with special characters with a look up name, without such characters, if needed.
**7.0.10 Comparison to current property view of Qalixa.com**
Previously, the current element for selecting among properties and property values was described briefly (See fig 1). Analyzing the current solution while taking inspiration from the suggested usability heuristics, we can make the following observation. For the task of allowing the user to select preferences given applicable properties, the current solution presents a number of visual elements roughly equivalent to the number of properties times the average number of values. The proposed solution presents a number of visual elements roughly equivalent to the number of properties plus the number of selected properties plus the number of values of the property currently in focus. This amount is strictly smaller in general and is expected to be significantly smaller for most cases (See fig 16).
Furthermore, the ratio of visual elements shown clearly to the user, high up on the page, that are of direct interest, versus such elements that are of secondary interest, should also have improved significantly. Filters corresponding to the user's preferences are maintained in plain sight and do not remain among the properties in the property list.
8 Discussion
The prototype tested a few of the presented ideas to some degree. Creation of custom filters was tried to some extent, but discarded due to adding little purpose to the prototype in its current form. The most prominent concept in the prototype is that of the filter tray. Dynamic filter lists were used as well, although it was hard to draw conclusions about their usefulness.
Apart from the suggestions, the project served to investigate the potential of Solr and similar technologies for Qalixa, and it brought a variety of new lessons on web technology, such as JavaScript frameworks, databases and search engines, to its author.
Given that relevant properties can be found in the property list and given that they house sensible values, some observations can be made about the benefits of the prototype and its approach.
Once properties have been selected, distance in between useful components is small. This is beneficial for exploring small variations on the same properties. Having sliders for number properties allows for fast exploration. Having low latency for queries when switching properties on and off allows for a fast feedback loop on what selections fall off as the scope is narrowed.
It can also be seen that the set of results is quickly narrowed down as the filter scope is narrowed. Drop-off due to false negatives will still be a problem, even with fuzzy faceted search. There will likely be a bias towards articles that have well-defined properties, although with fuzzy faceted search, recall should become sufficient. Some loss of precision due to variations in how article properties are defined is to be expected. On the other hand, it is arguably so that an article is more likely to be incorrectly excluded/demoted than included/promoted when filtering on a property: overlooking the act of setting a property seems more likely than setting it wrongly.
It is observed that being able to quickly alternate between adjusting various property value scopes is useful. This is partially accomplished by the prototype, although the later prototype only allows one active, as in adjustable, filter at a time. Earlier snapshots allowed filters to be expanded and collapsed at will. The later approach makes the scene slightly less complex but also hinders switching between filters quickly.
Having interesting filters clearly separated from the overall property list also seems to make user preferences manageable, although there is an ambiguity regarding the persistence of filters. What should happen when the filters are no longer relevant for the search phrase? The current behavior of the prototype is to remove such filters. This can sometimes be problematic. A better choice is perhaps to render those filters inactive and mark them as such, allowing the user to remove them at will.
There was an early attempt at allowing the user to select filters by drag and drop, as in the concepts suggested above. This was deemed to supply little value and some distraction for the user, and was discarded in favor of simply clicking elements to turn them into filters. A more complex web of possible interactions between various components, where elements can move between various parts to a greater extent and in a meaningful way, could make drag and drop useful again.
In the prototype, the order of the result list is determined by the order of used filters in the filter list. While this may be a useful feature in some edge cases, it is not a practical default behavior, as it communicates badly to the user. Having explicit controls to enable this behavior may be useful, although the de facto default behavior of most result lists out in the wild, to sort when a column header is clicked, is likely more intuitive due to its strong history. However, the method suggested here has the potential advantage of sorting on multiple criteria. This could be useful, but likely only in peripheral use cases.
8.1 Suggestions for academia, industry and Qalixa
To establish structure and coherency of the meta data, a few things are suggested for further investigation and for solving related problems in broad terms.
8.1.1 Pruning and sorting properties using statistics and traits
Two methods for grouping properties are recognized: properties can be grouped by categories or grouped by traits. Category here denotes, and assumes, a preexisting grouping of articles in a category tree. Such structure can be carried over to the properties, so that each category is tied to the properties that occur in the articles sorted into it. If a category has been selected by the user, properties can be shown based on whether they belong to that category, as properties that exist outside the category are likely irrelevant. Further, properties that are part of a subcategory or a smaller subset of articles in the current category may not be very relevant either, as a filter on such a property would only be applicable to a small part of the articles. A generalization of this idea would be that of finding the applicability of facet properties in a given selection, regardless of having a stated category. Bear in mind that it might not only be properties found in the current selection that are of interest. Similarly to how non-selected values may be of interest, non-used properties may be of interest too, once the user looks to expand his/her preferences rather than narrowing them.
We could introduce a notion called *facet traits*. A facet property may have more or fewer values, and values may be of certain types. A property conveys a number of applicable values for a given selection, the values that exist in the selection, but other values that may be of interest may exist throughout the database. Facet traits may be made to connote the characteristics of a property's values, throughout the database on one hand, and given a selected context on the other. Such traits can be used to further determine how a property is to be presented. The *facet trait* idea can also be extended to connote the property characteristics described above: to how large an extent of the current selection does the property exist? To how large an extent of the entire database, and in what categories, does it exist? Such traits are thus in part consistent regardless of context and in part context dependent.
Having such trait information available could help in building the presentation and allow qualified decisions on what to present and how to build user interface elements. Examples of using such traits could be:
- Turning single-valued facets into keywords
- Hiding facets that exist only for a small subset
- Showing facets that do not qualify for the current selection but do for a relevant superset, for example a category
8.1.2 Controlled folksonomies
Having folksonomies, or wiki-like tag systems, can be a useful way to establish structure in a large data set where items are interacted with by users on a regular basis, but finding the best way to harness them can be a challenge. (52) explain different algorithmic methods for extracting good tag sets from folksonomies, taking synonyms and similar problems into account.
8.1.3 Client side facet search
In order to elevate the potential of facet search, and to allow fluid filtering for the user even if the server load is considerable, using client side facet search could be desirable. There are implementations for this available that show great promise. While client side search could not realistically cover the breadth of a server based solution, it can serve as a complement that works on narrowing down the set of results given by the server. The server could be allowed extended latency and fill up with new results only when the client side result set is draining out. (53) (54)
8.1.4 Grouping facets by outsourced semantics
One promising approach to enhance structure is to utilize outside sources of semantic structure, so-called semantic web technologies. For example, DbPedia (55) is an initiative for extracting semantic data from Wikipedia. A derived application, DbPedia Spotlight, can infer context for words in a text. Inspiration can also be taken from the way a prototype, Sztakipedia (56), helps the user by suggesting meta data to add to an article while it is being worked on. Such a concept could be used in conjunction with the percolation concept suggested earlier.
8.1.5 Annotating rather than excluding invalid filters and values
The prototype combines different approaches to dealing with invalid filters and values. Filters are removed when no longer needed. Values can either update dynamically or remain untouched. Such mixed approaches are arguably confusing. A better solution, until a comprehensive one is found, would be to mark options depending on the effect using them would have. Unused but selected filters that represent fields no longer found in the result set could be marked red for invalidity. That way the user can easily backtrack if that property was of interest. Values of an active filter that represent articles that are not in the result set could be marked blue to communicate a potential addition.
8.1.6 Multi touch interaction for facet navigation
A suggestion made by (57) is to use multi touch devices to allow additional degrees of freedom when exploring facets. In the current prototype, only a single preference can be adjusted at a time. The concepts described above suggest letting the user explore multiple dimensions at once. By allowing adjusting multiple preferences at once, preferably with sliders utilizing multi touch capability, another variation is possible. The method would however be limited to devices with that capability.
9 Appendix
9.1 Source code
The source code for the prototype can be found at (58). It can be deployed and used by following the contained README. The instructions as well as known issues will be repeated below for completeness, but may not be up to date.
9.1.1 Summary of repository contents
All these components are designed to work reasonably well alone. They should depend on each other only at run time. The data importer depends on Solr. The grouping component depends on the data importer. The client depends on the grouping component (although search should work without grouping; possibly you need to disable grouping in the Solr config).
- Flat table import handler (server/customComponents/dataImport): for importing data from a MySql database to Solr
- Field grouping component (server/customComponents/fieldGroupingComponent): for clustering of attributes when submitting a search query. Has two grouping algorithms that are switchable by a constant in the code (FieldGroupingComponent.groupingScheme).
9.1.2 Known issues
As far as is known, all of the known buggy behaviors can be escaped by reloading the page.
- The gui does not update if the result of the query was null
- Clustering algorithm is throwing exceptions for some (small or empty?) data sets, causing failure to update
- Selecting some attribute values containing whitespace or unconventional symbols may cause a malformed query to Solr, failure to update, or wrong results
- Submitting an empty phrase (in search field) will return a result
- The maximum number of imports processed by the data importer is currently hard-set by an if statement in its main while loop. (Intentional for debug purposes)
- The multi valued-ness of properties currently needs to be set in two places in the solr configuration. In the configuration of the import handler and in the schema. It would be better if it could be set only in the schema.
- For some queries Solr responds with a null object. However, at other times ‘no documents found’ results in a result object with no
- The current schema produces an extremely large database. Consider tweaking the schema and removing long descriptions from the index. It is currently not possible to save the entire database to disk.
9.1.3 Set up
9.1.4 Assumptions
- apache 2 running on machine
- mysql >= 5.6 running on machine with database adherent to config in server/solrconfig
9.1.5 How to set up server
- fetch submodules
- run getSolr in server folder
- run runServer to start server
- fill up the database by making data import query to solr: localhost:8983/solr/flattable-dataimport?command=full-import
- Check its progress by visiting: localhost:8983/solr/flattable-dataimport
- If the data is beginning to take up too much space, you can abort and commit with localhost:8983/solr/flattable-dataimport?command=stop
9.1.6 How to set up client
- Configure apache to point at the client folder
- Go to localhost in the browser
- Use the gui!
9.1.7 Build data importer and grouping module
- You should not need to do this to run; there are pre-built jar binaries in server/solrconfig/lib.
- If you want to update the jars, run the ant build scripts in each eclipse project, default target.
9.2 Additional papers
Here are a few sources for relevant papers that were discovered too late to be utilized in this paper; they should be evaluated in future projects:
- Symposium on Human-Computer Interaction and Information Retrieval (59)
Papers that were not included but may be of interest are:
- (60) propose a system for mapping between taxonomies of products in e-commerce applications.
- (61) deals with interoperability of databases by correlating schemas using similarity measures.
- (62) describe optimizations for spectral clustering, a method for allowing dimensionality reduction that may be relevant for cases where the input set consist of a large adjacency list.
10 Citations
57. FJELD, Morten. 2013. S.l.: personal communication.
SNO Source Manipulator Control Code
T. J. Radcliffe
Department of Physics,
Queen's University at Kingston,
K7L 3N6, Canada
1 Introduction
This document describes a set of Hardware-derived classes for control of the SNO calibration source manipulator. It is intended as a guide to the code, which is heavily documented internally, rather than an exhaustive description. Please see A HARDWARE CONTROL CLASS HIERARCHY for details on the Hardware classes. The calibration source manipulator is an over-constrained physical system: when moving in a single plane it has three ropes attached to it, and when moving out of plane it has five ropes attached to it. The primary task of the control code is to deal with possibly conflicting constraints as intelligently as possible. Note that the way of dealing with this problem naturally incorporates the possibility of out-of-plane motion: nothing has been added to the code to support this capability.
A secondary task of the code described in this document is to communicate with the Data Acquisition computer (DAQ). This is done via a TCP connection over an Ethernet line. The connection must be able to carry commands from the DAQ to the calibration computer and return status information from the calibration computer to the DAQ. The DAQ is a Macintosh; the calibration computer is an IBM PC clone.
The Hardware class hierarchy does not support multiple inheritance, so the class hierarchy described here, which is a set of Hardware-derived classes, consists of classes that contain each other rather than inherit from each other. For the most part this is appropriate: an Axis contains a Motor, an Encoder and a Loadcell, rather than is a Motor, an Encoder and a Loadcell. The term “class hierarchy” is used throughout to include both “has a” and “is a” relationships.
Section 2 describes the philosophy behind the classes developed for manipulator control. Section 3 describes the sort of thing the system is intended to do, including some of the things it can't yet do, but should. Following this is a section which gives an overview of the code structure. After this comes a series of sections dealing with the Hardware-derived classes in descending order of abstraction, starting with the PolyAxis class and winding up with the low-level classes that deal directly with boards and chips. Section 18 discusses the main program that puts all this stuff together at the moment. Section 15 describes the Server class and the work that still needs to be done on it. Section 19 describes the Borland C++ IDE settings required to compile the code. Section 16 describes various utility functions for doing things like dealing with object databases. Section 20 deals with the oft-neglected but ever important subject of co-ordinate systems. The final section discusses changes and improvements that are still required before the code goes underground.
Note on capitalization etc.: there is a widespread practice in object-oriented programming circles for the first letter of class names to be upper case, and the first letter of object names to be lower case. This is a practice that is slowly being built into the existing Hardware-derived code, and I heartily encourage everyone to follow it. It may be mindless conformance to arbitrary, socially defined conventions, but then again, so is speaking English. In this document, class names will be mostly bold face and function names will be mostly italic. Code examples and variable names will be typewriter. I tend to use “class” and “object” interchangeably, although one is collective and the other is singular.
Note on standards: C++ is not very standardized just yet, and all of this code was meant to run on IBM PC compatibles when compiled with the Borland C++ compiler. Parts of it have also been compiled with the GNU g++ compiler version 2.6.8 under LINUX, and I found that a few minor changes had to be made to do this. When I refer to “ordinary C++” I mean whatever the people at Borland think of as ordinary. If you try to port this code to another platform or even another compiler on the same platform, you may find a number of errors due to small differences in standards.
2 Code Organization and Philosophy
The code is organized in a fairly hierarchical way on the basis of how abstractly each class interprets the raw bits passed to it from the hardware or from classes that access the hardware. Looked at from a reductionist point of view, everything a computer deals with is “really” a stream of bits, in the same way everything around us is “really” a collection of atoms, uh, nucleons and electrons, uh, quarks and leptons, uh, supersymmetric technicolour monads.... I put "really" in quotes because I think this is a stupid idea: stuff exists, and how we analyze it into concepts is up to us (in technical terms, I am a non-eliminative reductionist). No particular level of abstraction or reduction is privileged. A stone is neither more nor less real than the atoms it is made of, as anyone who has stubbed a toe knows. The point of all this is that we are free, within certain constraints, to choose how we interpret the bits in a register or memory location. Different levels of abstraction are the result of interpreting the bits in different ways: low levels treat them as bits (possibly related to some chip or board-level function), and higher levels treat them as being meaningfully related to some physical quantity like the pressure being measured by a transducer. Because no level of abstraction is privileged, there is no "one right choice" for dividing the code into abstract levels, and so some kind of objective design principles are needed to guide our decisions about where to put what kind of object in an abstract hierarchy.
The principles used in designing classes for controlling the manipulator result in roughly four levels of abstraction:
Level 1: classes that deal directly with hardware registers and treat register values simply as collections of bits
Level 2: classes that deal with hardware registers, but which impose an abstract meaning on the register values
Level 3: classes that deal with hardware registers only through Level 1 or 2 classes, and typically convert register values to physically meaningful numbers
Level 4+: classes that deal with hardware only through Level 3 or higher classes, and whose main function is to co-ordinate the actions of several Level 3 or higher objects.
Level 1 classes are those that represent a single chip or board, such as the AM9513 timing controller chip or the TIO10 board. These classes don't care what the values they write out to the boards mean apart from board-level or chip-level functionality. They contain no information that is in any way dependent upon the use to which the board or chip may be put.
Level 2 classes also deal with boards and chips directly, but they have an awareness of the meaning of various inputs and outputs. Thus, a Level 1 class might read a register and put the value into a variable called register1 but a Level 2 class would put it into a variable called counter if it was a counter value. Level 2 classes impose a meaning on the bits themselves without applying any transformation to them. Thus, a read from a particular memory location may be recognized as an ADC output, but no commitment is made as to what might be going into the ADC input.
Level 3 classes typically describe single pieces of external hardware, such as a motor or a position encoder or other instrument. They contain Level 1 and 2 objects to access the hardware, as well as higher level information about the meaning of bits sent and received from those objects. Thus, the output of a counter may be interpreted as a shaft position, for instance, which allows other objects to ask the encoder sensible things like “What position are you reading now?” rather than “What are the bytes in register base+offset, uh, how do I convert those to a floating point value that represents position relative to, uh, what was the zeropoint again....?” Level 3 classes implement transformations – via some calibration constants – of raw input data into physically meaningful terms.
Level 4+ classes describe systems of Level 3 objects or above. Two examples from the manipulator control classes are the Axis class and the PolyAxis class. Each Axis consists of a Motor and an Encoder and a Loadcell as well as various state information. A PolyAxis consists of an array of Axis objects and related state information. The Encoder and Loadcell classes themselves are Level 3 classes.
The notation “Level 4+” refers to all classes of Level 4 or higher; at this point in the hierarchy a class’s level is one higher than that of the highest-level class it contains. It is a very bad idea to create classes that cannot be assigned an unambiguous level. If two classes contain references to each other it is impossible to assign them to a level within this scheme, because they can’t both be one level higher than the other. This means you have created two structures that describe a single concept, and you should think about re-writing them so the dependencies sort themselves out at lower levels.
Level 4+ classes are where the interesting parts of the control algorithm are typically implemented. Level 3 classes can do things like set a single motor moving, and perhaps tell it to stop after a predetermined time or number of steps, but they cannot implement feedback control because they only describe a single piece of hardware: one can’t describe both a motor and a position encoder, for instance, or it would be a Level 4 class.
One could break this scheme, and write a single class that accessed both a motor and a position encoder simultaneously, but this would radically reduce the modularity of the code. It would become impossible to use the code to control a motor in the absence of an encoder (which is useful at least for debugging purposes) and impossible to use an encoder without a motor. Low coupling, or high modularity, is one of the most powerful aspects of object-oriented programming, and the levels described here can serve as a set of design principles that should ensure that the highest possible degree of modularity is retained.
Note that some low-level classes were written before the level scheme described here was imposed on the code, and so there may be minor deviations from this scheme in those classes.
3 Desired Functionality
The purpose of the manipulator control classes is to allow a source to be moved around inside the SNO detector at the request of the DAQ system, to report the status of the source back to the DAQ system upon request, and to ensure that the chance of damage to the detector or loss of a source inside the vessel is the minimum possible. The two basic disasters that can happen are the tension becoming too large on a rope and a rope going slack. The first of these is of relatively minor concern from the point of view of software: the mechanical design of the system is intended to prevent excessive forces from acting on the acrylic vessel; the motors are supposed to stall well before the tension is in the danger zone. This has not been tested.
A rope going slack is the major danger, as it can lead to tangling, which may prevent a source from being lifted out of the vessel. This would be very bad. A set of air-cylinder tensioners has been added to the mechanical system to help prevent this, but I am still not convinced that it is adequate to do the job. The mechanics of the system must undergo many weeks of reliability testing prior to going underground. The control algorithm, described in detail in Section 6 is intended to be a backup to the mechanical systems that prevent slack ropes. Ideally, one would like a tensioning system that only came into play when the tensions were near the allowed bounds. The air-cylinder system does not do this: it looks like a spring with a discontinuously variable spring constant. When a source is stationary the air-cylinder system is in equilibrium. When the manipulator starts to move a source it increases the tension on some ropes, decreases it on others. This causes the air-cylinders to move. In their simplest mode, the air cylinders look like constant force devices within the bounds of the limit switches, although the finite reservoir size gives them some spring-like characteristics as well. In a typical move the rope tensions change until the air-cylinder limit switches are reached, and then the fresh flow of air brings the cylinders back into the middle equilibrium position again. This looks to the control algorithm like a mysterious source of movement, as the air-cylinders are between the motors and the position encoders, and this may cause the system to generate a warning about the possibility of motor stall or a slipping encoder shaft.
The communications aspects of the manipulator control classes break into two parts: command processing and communications. The command processing part is handled by the Hardware framework as described in A HARDWARE CONTROL CLASS HIERARCHY. The communications part is supposed to be handled by the Server class, but this is not yet fully implemented.
4 Overview of the Manipulator Classes
A schematic of the manipulator control classes is shown in Figure 1. At the highest level is the PolyAxis class. As its name suggests it is for dealing with multiple Axis objects and co-ordinating their movement. An Axis object consists of a Motor, an Encoder and a Loadcell object and thus constitutes the primary object for feedback control of a single rope. The job of the PolyAxis object is to determine the position of the source and set the error inputs to the feedback algorithm in each Axis object. The Axis objects that constitute a PolyAxis are stored in an array which will typically have three or five elements for in-plane or out-of-plane motion respectively.
PolyAxis also contains an AV object, which supplies it with the current state of the acrylic vessel as measured by the AVPsense system. The hardware interface classes for this system are shown in a vapour cloud because they have not been written yet, as the DataConcentrator system for interfacing these boards into the PC is not yet complete. Note that when this hardware becomes available the DigitalChannel and AnalogChannel classes will have to be substantially re-written as well. It may be worthwhile to substantially increase the degree of modularity in these classes at that time. In particular, the card that fits into the PC backplane, the DataConcentrator backplane card and the DataConcentrator cards themselves should all have their own classes. This was not done in the current incarnation because it would just have to be redone completely in the next incarnation.
As mentioned above, the Axis class contains the Level 3 classes Motor, Encoder and Loadcell. Because the Hardware class hierarchy does not support multiple inheritance these subclasses are contained in - rather than superclasses of - the Axis class. They themselves communicate with the hardware via Level 2 classes like AnalogChannel and Level 1 classes like TIO10. There is at the moment only one TIO10 card in the system. This is a general purpose National Instruments timing control card. It supplies both a globally accessible realtime clock and eight clock outputs for driving motors. The TIO10 board includes a pair of Advanced Micro Devices AM9513 timer/counter chips and a pair of Motorola MC6821 Peripheral Interface Adapter (PIA) chips. The TIO10 class contains objects to describe these chips, and the AM9513 class includes a set of five AM9513Channel objects that correspond to the five channels available on each chip. These chips and channels are referenced by other objects in the class hierarchy.
The purpose of the Axis class is to provide feedback control to a motor based on either error signals set by a PolyAxis object that contains that Axis or based on a simple feedback algorithm internal to the Axis class. The latter allows standalone operation. One of the basic constraints on Axis design was that no Axis object should have to know what any other Axis object is doing, thus maintaining hierarchy as described in Section 2. The Axis class implements different control algorithms depending on the control mode (i.e. PolyAxis or standalone). In standalone mode the algorithm is a simple position feedback algorithm that updates the number of steps it wants the motor to take until it gets close to the end, and then it just lets the motor complete the travel itself. The Motor object in this case is in step control mode, in which it tries to match the actual number of steps taken with the desired number, which is set by the Axis object. When the actual number matches the desired number the Motor stops itself.
When an Axis is in PolyAxis control mode it uses a fuzzy logic algorithm to control the speed of its motor. In this case the motor is in velocity control mode. The fuzzy logic algorithm, discussed in detail in Section 6 is a nonlinear PID controller; the fuzzy logic aspect is more of a design tool than an implementation tool in this case.
The Encoder and Loadcell objects deal with calibration constants as well as hardware registers. They access hardware via the DigitalChannel and AnalogChannel objects respectively. These Level 2 objects currently mix up different hierarchical levels fairly badly, and need to be re-written when new hardware is put in place in any case. A strong case can be made for breaking them up into separate PCcard, DCcard and DCinterface classes.
The Server class is shown at Level 3, as it accesses hardware via lower-level entities from the PCTCP socket library. It is part of the Hardware class hierarchy because it needs to be polled periodically to ensure communication with the DAQ system is maintained. At the moment the Server class is in an incomplete state: it appears to work in a standalone test setup, but does not behave properly in the manipulator control program.
The Clock class has global scope and is designed to return the time in seconds since the program started running. This time is a double precision real number. As time is a continuous global quantity the principle of verisimilitude argues that the object that supplies knowledge of it should also
be global and represent that quantity as continuously as possible. Once the Server class is made to work properly it may be worth adding the capability of referencing the Clock time to the GPS clock as seen by the DAQ. The Clock class uses two channels on the TIO10 board to count ticks of the tio10's 5 MHz clock. The TIO10 object itself has to have global scope so that it can be defined prior to the Clock object.
There is also a global Display class that is not part of the Hardware class hierarchy as it is effectively polled at the system level. It serves as a standard output interface for all the Hardware-derived objects using various DOS screen-control functions. There are a number of helper objects, such as the Keyboard class, that deal with I/O and other management problems. Many of these are implemented in ordinary C subroutines.
The main program defines an AV object, a TIO10 object and a Clock object as well as a PolyAxis object. The PolyAxis constructor takes care of calling constructors for all the lower level objects, as discussed below, but takes pointers to the AV object and the TIO10 object as arguments. The main program loop consists of an inquiry to the Keyboard object to see if a complete command string is available, where “complete” means it ends in a carriage return. If one is, then the command string is passed to the Hardware::doCommand() routine for parsing and, if possible, execution. On every pass through the main loop the Hardware::doPoll() routine is also called, which runs the poll() member function of all Hardware-derived objects. At the moment the main program also displays various status information about the PolyAxis object, such as the estimated position of the source and the residual force on it.
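The shape of the main loop described above can be sketched as follows. Keyboard, Hardware::doCommand() and Hardware::doPoll() are the names from the text, but these stand-in classes are invented for illustration: the real doPoll() runs the poll() member function of every Hardware-derived object, not a single counter.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal stand-ins for the real classes, for illustration only.
struct Keyboard {
    std::string buffer;
    // a command string is "complete" when it ends in a carriage return
    bool commandReady() const { return !buffer.empty() && buffer.back() == '\r'; }
    std::string take() { std::string s = buffer; buffer.clear(); return s; }
};

struct Hardware {
    std::vector<std::string> executed; // commands dispatched so far
    int polls = 0;                     // number of doPoll() calls
    void doCommand(const std::string& cmd) { executed.push_back(cmd); }
    void doPoll() { ++polls; }         // really polls all Hardware objects
};

// One pass through the main loop: dispatch a complete command if one
// is available, and poll the hardware on every pass regardless.
void mainLoopPass(Keyboard& kb, Hardware& hw) {
    if (kb.commandReady()) hw.doCommand(kb.take());
    hw.doPoll();
}
```

The point of the structure is that polling happens unconditionally, so feedback control continues whether or not the operator is typing.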
5 The PolyAxis Class
The PolyAxis class consists of an array of Axis objects, an AV object and a collection of information about the source. Each PolyAxis object has its own entry in the database file POLYAXIS.dat. This entry is read by the constructor based on the PolyAxis name. The constructor also takes as arguments pointers to the AV object and the TIO10 object that are created at the top level. The format for the database entries is shown in Table 1.
The first line in the database file is the version identifier, which is the string VERSION followed by 1.00 with no spaces. This is matched with a version number supplied by the code to ensure the format matches what is expected. The version number is local to each object type, so that one may be updated without affecting the others. A PolyAxis entry is identified by the string POLYAXIS: followed by a name. The name comparison is case insensitive. Following the PolyAxis name is a list of Axis object names. These names are used to find Axis objects in their database file (see Section 6.) The POSITION: line gives the most recently known source position in centimetres from the centre of the detector co-ordinate system. This entry is updated once per second by the PolyAxis poll() routine when the source is moving, and once every ten seconds when the source is stationary, so the PolyAxis object will know where the source should be when it starts up. If for some reason you move the source by hand you will have to change this entry or the source-finding routine of PolyAxis will probably fail. The last two lines give physical parameters of the source: its mass and volume. These are needed to estimate the forces acting on the source. For sources in air, just set the volume to zero to eliminate the buoyancy correction. The density of D₂O is taken to be 1.10 g/cm³. The mass and volume of the source should have their units specified (the mass can be in g or kg, the volume in cm³, cm**3, cc, l, m³ or m**3.) Masses are converted internally to kg, volumes to cm³.
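As a concrete example of how the mass, volume and D₂O density combine, here is a sketch of the buoyancy-corrected weight. The function name and exact form are illustrative, not taken from the code; the 1.10 g/cm³ density is the value quoted above.

```cpp
#include <cassert>
#include <cmath>

// Buoyancy-corrected weight of the source in newtons, with mass
// already converted to kg and volume to cm^3 (the internal units).
double netWeight(double massKg, double volumeCm3) {
    const double RHO_D2O = 1.10e-3; // kg per cm^3, density quoted in the text
    const double G = 9.81;          // m/s^2
    return (massKg - RHO_D2O * volumeCm3) * G;
}
```

Setting the volume to zero, as recommended for sources in air, simply removes the buoyancy term.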
The PolyAxis object has the responsibility of figuring out where the source is and how to change the lengths of the various ropes to get it somewhere else. The Axis objects have the responsibility of changing the rope lengths while maintaining rope tensions within their upper and lower bounds.
The basic tool the PolyAxis object uses to track the source is the function findPositionL(), which finds the position of the source by looking at the lengths of the Axis ropes. For the best estimate of the source position the sum-squared length error is a minimum. The minimization is done by iterating on a linearized version of the problem, which increases speed and reliability over doing the full non-linear minimization. Various approaches to non-linear minimization were taken prior to settling on the iterated linear algorithm. The Marquardt-Levenberg algorithm was tried, but the surface is funnel-shaped with a very small parabolic bottom. The M-L algorithm wandered badly: a bad parabolic step was followed by a good steepest-descents step that failed to get close enough to the parabolic region for the next parabolic step to be any good. The downhill simplex algorithm was also tried (amoeba() from NUMERICAL RECIPES IN C) but was too slow for reliable control. A simple steepest-descents algorithm was used with some success, but it was neither as robust nor as fast as the iterated linear algorithm.
The iterated linear algorithm begins with the equation:
\[
L_r^2 = \sum_{i=1}^{axisNumber} \left( L_i - (\vec{x}_{t_i} - \vec{x}_s - \vec{\Delta}_i) \cdot \hat{u}_{t_i} - (\vec{x}_{b_i} - \vec{x}_s - \vec{\Delta}_i) \cdot \hat{u}_{b_i} \right)^2 \tag{1}
\]
where \( L_r^2 \) is the squared length residual, \( \vec{x}_s \) is the position of the source, \( \vec{x}_{t_i} \) is the position of the top pulley of axis \( i \), \( \vec{x}_{b_i} \) is the position of the bottom attachment point of axis \( i \), \( \vec{\Delta}_i \) is the offset of the source-carriage pulley from the source-carriage centre point, and \( \hat{u}_{t_i} \) and \( \hat{u}_{b_i} \) are the directions of the top and bottom segments of the rope. For the central rope, which does not have an attachment point in the vessel, \( \vec{x}_b \) is equal to \( \vec{x}_s \), and \( \vec{\Delta} \) and \( \hat{u}_b \) are zero, so the third (bottom-segment) term drops out. \( L_i \) is the measured length of the \( i^{th} \) rope. The second and third terms in the equation will be readily recognized as the lengths of the rope above and below the source. The sum of these terms for a given axis is just the total length of rope expected for the source at position \( \vec{x}_s \). A schematic representation of these quantities is shown in Figure 2.
The minimization is carried out by taking the derivative of the squared length residual with respect to each direction, with the assumption that the directions of the upper and lower sections of rope remain constant while the source position changes. This results in the set of linear equations for the source position:
\[
\begin{bmatrix}
-2 (u_t + u_b)(\hat{u}_t + \hat{u}_b)^{\top} \\
-2 (v_t + v_b)(\hat{u}_t + \hat{u}_b)^{\top} \\
-2 (w_t + w_b)(\hat{u}_t + \hat{u}_b)^{\top}
\end{bmatrix}
\begin{bmatrix}
\Delta u \\
\Delta v \\
\Delta w
\end{bmatrix}
= 2 (\hat{u}_t + \hat{u}_b) \left( L - (\vec{x}_t - \vec{x}_s - \vec{\Delta}) \cdot \hat{u}_t - (\vec{x}_b - \vec{x}_s - \vec{\Delta}) \cdot \hat{u}_b \right) \tag{2}
\]
where \( (u, v, w) \) are the components of the unit vectors, \( (\Delta u, \Delta v, \Delta w) \) is the change in the estimated source position, and for each term a sum over \( i \) is implicit.
Solving these linear equations is handled by the ThreeVector class. The solution has one problem: for motion in a single plane there is a tendency
for the solution to wander out-of-plane to make up for any errors in the rope lengths. This is dealt with by the relatively harsh expedient of zeroing any components of the equations that are out-of-plane. This is done by the PolyAxis constructor creating a vector called freeze during startup that is used to mask off any out-of-plane components. freeze has a zero component for any direction that has a sum of absolute values of the bottom attachments of less than 1 cm. If the acrylic vessel moves significantly it may be necessary to relax this standard somewhat.
The linearized equations are iterated until the change in source position between iterations is less than 0.1 cm. Ideally one would like to minimize the tension error at the same time as the length error (in particular, this would eliminate the problem of wandering out-of-plane.) Unfortunately, I haven’t been able to figure out how to cast the tension equations in a similarly linearized form, so there is no set of grand linear equations to do this particular job.
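A minimal sketch of the iterated linear algorithm follows, assuming zero carriage offsets and no out-of-plane freezing. The names Rope, V3 and findPosition are illustrative stand-ins for the real Axis and ThreeVector machinery; the 0.1 cm stopping criterion is the one quoted above.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 3-vector, standing in for the ThreeVector class.
struct V3 {
    double x, y, z;
    V3 operator+(const V3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    V3 operator-(const V3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    V3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(const V3& o) const { return x * o.x + y * o.y + z * o.z; }
    double norm() const { return std::sqrt(dot(*this)); }
};

// Determinant of the 3x3 matrix whose columns are a, b, c.
static double det3(const V3& a, const V3& b, const V3& c) {
    return a.x * (b.y * c.z - b.z * c.y)
         - b.x * (a.y * c.z - a.z * c.y)
         + c.x * (a.y * b.z - a.z * b.y);
}

struct Rope {
    V3 top;         // where the rope leaves the top pulley
    V3 bottom;      // bottom attachment point (unused for the central rope)
    bool central;   // the central rope has no bottom segment
    double length;  // measured rope length L_i
};

// Exact rope length for a source at s (used to synthesise test data).
double ropeLength(const Rope& rp, const V3& s) {
    double L = (rp.top - s).norm();
    if (!rp.central) L += (rp.bottom - s).norm();
    return L;
}

// Iterate the linearised normal equations until the position step is
// below 0.1 cm. Directions are recomputed at, then held fixed within,
// each iteration, as the text describes.
V3 findPosition(const std::vector<Rope>& ropes, V3 xs) {
    for (int iter = 0; iter < 50; ++iter) {
        V3 c0{0, 0, 0}, c1{0, 0, 0}, c2{0, 0, 0}, rhs{0, 0, 0};
        for (const Rope& rp : ropes) {
            V3 ut = rp.top - xs;
            double lt = ut.norm();
            V3 u = ut * (1.0 / lt);          // unit vector of top segment
            double expected = lt;
            if (!rp.central) {
                V3 ub = rp.bottom - xs;
                double lb = ub.norm();
                u = u + ub * (1.0 / lb);     // u_t + u_b
                expected += lb;
            }
            double r = rp.length - expected; // length residual for this rope
            c0 = c0 + u * u.x;               // accumulate columns of sum u u^T
            c1 = c1 + u * u.y;
            c2 = c2 + u * u.z;
            rhs = rhs + u * (-r);
        }
        double D = det3(c0, c1, c2);         // Cramer's rule for the 3x3 solve
        V3 d{det3(rhs, c1, c2) / D, det3(c0, rhs, c2) / D, det3(c0, c1, rhs) / D};
        xs = xs + d;
        if (d.norm() < 0.1) break;           // converged to better than 0.1 cm
    }
    return xs;
}
```

With exact synthetic lengths this Gauss-Newton-style iteration recovers the source position in a handful of passes, which is the behaviour the text attributes to the iterated linear algorithm.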
As well as knowing what the source position is, the PolyAxis object has the responsibility of changing it. This is done using the to() member function, which takes a string argument containing the position in three-space of the desired position. A few simple tests are applied to this position to ensure it is inside the vessel and can, at least in theory, be reached without violating any tension constraints. If the point meets the constraints a path is generated from the current position to the final position. This path consists of at most two straight line segments. If the start and end positions are either both in the vessel or both in the neck, only one segment is generated. If the source is to move from the vessel into the neck or vice versa then two segments are generated: one to a point just below the neck, the other away from this point to the end position. There is a member function called neckIntersection() that determines if and where a rope intersects the neck ring. It is based on the assumption that there is no friction between the neck ring and the rope, so that the rope will always lie in a plane that contains a radius of the neck ring.
Prior to returning control to the main loop, to() calls the polyActivate() member function of each of the Axis objects that make up the PolyAxis. This puts the Motor object in each Axis into velocity control mode, and places them in standby mode. It also sets the expected length for each rope, turns off command acceptance for the Axis and its sub-objects, and initializes some arrays used to store past values of length and tension for rate-of-change calculations needed for damping rules (see Section 6 for details.) Control is then returned to the main loop, and the rest of the PolyAxis control sequence is carried out by the poll() function.
The basic tasks of the PolyAxis poll() function are to move a point along the line segments calculated by to() and to set up error values for each Axis object based on the difference between the position of that point and the estimated position of the source. The point is the "hare" of a hare-and-hound controller, with the source itself playing the role of hound. The point is moved according to a hardcoded velocity profile such that the point velocity increases linearly for the first 20 cm of path to a maximum of 2.0 cm/s, and decreases similarly at the end of the path. For paths that turn at the neck, the source is brought to a stop at just below the neck, and then a similar velocity profile is followed for the second part of the path.
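The hare's velocity profile can be sketched as a simple trapezoid. The 20 cm ramp length and 2.0 cm/s maximum are the values quoted above; the function name and the exactly linear shape are assumptions for illustration.

```cpp
#include <algorithm>
#include <cassert>

// Hare speed in cm/s at distance s (cm) along a path of the given
// length: linear ramp up over the first 20 cm to 2.0 cm/s, cruise,
// then a mirror-image ramp down over the last 20 cm.
double hareSpeed(double s, double pathLength) {
    const double V_MAX = 2.0;  // cm/s, quoted maximum
    const double RAMP  = 20.0; // cm, quoted ramp distance
    double up   = V_MAX * s / RAMP;
    double down = V_MAX * (pathLength - s) / RAMP;
    return std::max(0.0, std::min({V_MAX, up, down}));
}
```

For a two-segment path through the neck, the text implies this profile is simply applied to each segment in turn, with the hare brought to rest at the segment boundary.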
The Axis object errors are set by the function errorSignal(). There is a related function errorSignalAll() that will be discussed in more detail below. errorSignal() sets both error tensions and length. The error lengths are simply the difference between the actual length and the desired length of any given rope for the current point position. The error tensions are set differently depending on the status of the rope: if a rope has a bottom attachment that is on the far side of the central axis relative to the current source position, the error tension is set in such a way as to move the rope toward a tension of 10 N. The error tensions of the other ropes are set to zero, which is a signal for the code to let them take on whatever values they like. This mechanism of setting the off-side rope tensions to 10 N effectively makes them idlers, or nearly so, and brings the control algorithm back into the realm of constrained rather than over-constrained systems.
The function errorSignalAll() is used to set errors for all ropes, and is useful for attempts at full over-constrained control. It was found that the loadcells are not generally accurate enough to do this effectively. Note that there are two rather similar terms in the code that are really quite different: an errorLength is the difference between the actual length and the desired length, and a lengthResidual is the amount a rope contributes to the RMS residual that was minimized to find the source position. All errors are defined to be (actual - desired) but the residual length is defined with the opposite sign to make the derivatives work out properly.
There are a number of sloppy usages in the code: the terms distance, position and length are used interchangeably in some cases and with distinct meanings in others. Force and tension are also used interchangeably sometimes and not others. There may be some idle functionality left over from earlier versions of the code, although I've tried to eliminate this where it seemed likely it would never be needed again.
The list of commands the PolyAxis object will accept from the keyboard is shown in Table 2.
Table 2: PolyAxis commands
<table>
<thead>
<tr>
<th>Command</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>to [x y z]</td>
<td>move to (x,y,z)</td>
</tr>
<tr>
<td>tensionFind</td>
<td>find and display position based on tensions</td>
</tr>
<tr>
<td>tensions</td>
<td>find and display desired tensions</td>
</tr>
<tr>
<td>netForce <x y z></td>
<td>find and display net force magnitude at (x,y,z) or current position if none given</td>
</tr>
<tr>
<td>pattern [fileName]</td>
<td>run through a pattern of endpoints</td>
</tr>
<tr>
<td>stop</td>
<td>stop all motors</td>
</tr>
<tr>
<td>tensionStop</td>
<td>toggle stop-on-tension condition</td>
</tr>
</tbody>
</table>
Required arguments are given following the command name in square brackets, optional arguments in pointed brackets.
The tensionStop command toggles a flag whose state determines how the code reacts to tension-out-of-bounds conditions. If the tension goes out of bounds this flag can be unset to allow individual Axis objects to be controlled without the PolyAxis object stopping the whole show on every poll. This flag is set automatically whenever the to() function is called, to provide some protection against forgetting to reset it.
The pattern command allows the source to run through a pre-defined pattern of points listed in the file given. The first line of the file is the name of the pattern log file to be used, which records where the source stopped for each endpoint and why. The rest of the lines are endpoints. The pattern-following code moves to an endpoint, pauses for about 10 seconds, then moves on until the pattern is complete.
One of the more problematic aspects of the control algorithm is knowing when to quit. There are five stop conditions:
**ENDPOINT_STOP** source is within the lesser of END_ERROR and distErr of the endpoint, but in no case is the endpoint condition tighter than 0.1 cm. distErr is the RMS length residual from all Axis objects.
**STUCK_STOP** source is more than 1/2 way to the endpoint and hasn’t gotten any closer in the last 10 seconds.
**NET_FORCE_STOP** NET_FORCE_LIMIT has been exceeded
**LOW_TENSION_STOP** tension is below LOW_TENSION on an Axis
**HIGH_TENSION_STOP** tension is above MAX_TENSION on an Axis
Depending on the frictional forces and hydrodynamic damping in the final system some of these conditions may need to be changed. The numerical values for the various quantities like END_ERROR are defined in the file PolyAxis.h.
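The five stop conditions above can be sketched as a single check, evaluated in priority order on each poll. The structure follows the text, but the numerical limits below are placeholders: the real values of END_ERROR, NET_FORCE_LIMIT, LOW_TENSION and MAX_TENSION live in PolyAxis.h.

```cpp
#include <algorithm>
#include <cassert>

enum StopCondition { NO_STOP, ENDPOINT_STOP, STUCK_STOP,
                     NET_FORCE_STOP, LOW_TENSION_STOP, HIGH_TENSION_STOP };

// Placeholder tuning constants (the real ones are in PolyAxis.h).
const double END_ERROR       = 0.5;   // cm
const double NET_FORCE_LIMIT = 50.0;  // N
const double LOW_TENSION     = 2.0;   // N
const double MAX_TENSION     = 100.0; // N

StopCondition checkStop(double distToEnd, double distErr,
                        double fractionOfPathDone, double progressLast10s,
                        double netForce, double minTension, double maxTension) {
    if (maxTension > MAX_TENSION) return HIGH_TENSION_STOP;
    if (minTension < LOW_TENSION) return LOW_TENSION_STOP;
    if (netForce > NET_FORCE_LIMIT) return NET_FORCE_STOP;
    // Endpoint window: the lesser of END_ERROR and distErr, but the
    // condition is never tighter than 0.1 cm.
    double window = std::max(0.1, std::min(END_ERROR, distErr));
    if (distToEnd < window) return ENDPOINT_STOP;
    // Stuck: more than half way there and no progress in the last 10 s.
    if (fractionOfPathDone > 0.5 && progressLast10s <= 0.0) return STUCK_STOP;
    return NO_STOP;
}
```

The evaluation order is an assumption; the text does not say which condition wins if several hold at once.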
6 The Axis Class
The Axis class is home of the fuzzy logic system that tries to maintain the Axis conditions such that the error terms set by the PolyAxis code are kept small. It also handles the tension constraints and includes damping terms in both length and tension to improve the stability of the system. The Axis constructor is typically called by the PolyAxis object. It is passed the name of the Axis, a pointer to the AV object that exists at the top level scope and a reference to the TIO10 object that also exists at the top level.
The database utilities search the file AXIS.dat for the named Axis, and then read the fields shown in Table 3. The MOTOR: LOADCELL: and ENCODER: fields name the Motor, Loadcell and Encoder objects that are part of this Axis. The LENGTH: is the length of the rope from the upper pulley to the bottom end. The DEADLENGTH: is the length of rope between the spindle and the top pulley. This length should be small in the final installation, but is large in the prototype. It is used by the code that corrects the length of the rope for the effects of elasticity. Note that the Axis object should always use the length returned by Axis::getLength(void), which includes a correction for rope stretch, and not Encoder::length(void) which returns the unstretched length. The OFFSET: is the offset in centimetres from the central point of the source carriage to the centre of the pulley on the source carriage through which the rope passes. At the moment the rope length calculation does not account for the varying amount of rope passed around the source carriage pulley as a function of distance from the central axis of the detector. The TOP: and BOTTOM: positions are the locations of the point where the rope comes off the top pulley and where it meets the bottom attachment point. For a central rope there is no BOTTOM: attachment point given, and the OFFSET: is zero. The LENGTH: field of the Axis object database entry is updated every second when the Axis is moving and every ten seconds when it is stationary.
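The stretch correction implied by Axis::getLength() presumably follows Hooke's law. This sketch assumes a simple T·L/(EA) elongation over the full tensioned length, dead length included; both that form and the stiffness value EA are assumptions, not taken from the code.

```cpp
#include <cassert>
#include <cmath>

// Corrected rope length: the encoder's unstretched length plus a
// Hooke's-law elongation of the whole tensioned rope (including the
// DEADLENGTH between spindle and top pulley).
double correctedLength(double encoderLengthCm, double deadLengthCm,
                       double tensionN) {
    const double EA = 2.0e4; // rope stiffness in N -- invented placeholder
    double tensionedLength = encoderLengthCm + deadLengthCm;
    return encoderLengthCm + tensionN * tensionedLength / EA;
}
```

This is why the text insists on Axis::getLength() rather than Encoder::length(): the correction depends on the tension and the dead length, which the Encoder alone does not know.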
The sub-object names passed to the Axis object are used to search the appropriate database files for the named objects, but the names these objects are given in the code are not the names used in the files; instead they are given names that are a compound of their class name with the name of the
VERSION 1.00
AXIS: AXIS0
MOTOR: motor0
LOADCELL: loadcell0
ENCODER: encoder0
LENGTH: 293.40527300000000 // re-written length
DEADLENGTH: 453.0 // unchanging length
OFFSET: 0. 0. 0. // Delta in Equation 1
TOP: 0. 5.0 389.0 // top pulley position
END;
AXIS: AXIS1
MOTOR: motor1
LOADCELL: loadcell1
ENCODER: encoder1
LENGTH: 614.24914660000000 // re-written length
DEADLENGTH: 453.0 // unchanging length
OFFSET: 0. 0. 0. // Delta in Equation 1
TOP: 50.0 0. 391.0
BOTTOM: 317.0 0. 89.53 // bottom attachment position
END;
Table 3: Axis database entries for central and side ropes
Axis that they are part of. Thus, the Motor object for axis0 is given the name axis0Motor and so on.
There are two Axis control modes, corresponding to the control modes of the Motor object: STEP_MODE and VELOCITY_MODE. The first of these is used for controlling a single axis: the number of steps the motor should take, and the direction to take them in, is calculated on each poll, and the Motor object is left free to work out how to take them. When the Axis gets to within 1 cm of the desired endpoint it stops recalculating the number of steps and lets the Motor finish the move.
VELOCITY_MODE is used for PolyAxis control. In this case the Axis object employs a fuzzy logic algorithm to estimate the change in velocity a motor should have, and then sets the desired motor velocity accordingly. The Motor object will change its velocity to match the desired velocity the next time it is polled. There are ten fuzzy rules in operation in the current controller:
- If tension is low then increase tension
- If tension is high then decrease tension
- If corrected length is short then increase length
- If corrected length is long then decrease length
- If not near end and relative tension is low then increase relative tension
- If not near end and relative tension is high then decrease relative tension
- If velocity is high then decrease velocity
- If errorLengthVelocity is high then decrease errorLengthVelocity
- If errorTensionVelocity is high then decrease errorTensionVelocity
- If dv is high then decrease dv
Some of the rules may look a little obscure, so some justification for using them is given toward the end of this section. The meanings of the various terms is given by reference to Figure 3. The “corrected length” is the measured length with a correction for the amount of the length residual. The reason for putting this in is so that inaccuracies in the source position that arise from imprecise measurements of rope length or the positions of the bottom attachment points don’t appear as error terms in the control algorithm. The rules themselves are coded in a set of Axis member functions that
have names like isLowT(float tension), which returns the membership of tension in the set LowT. A fuzzy set is defined along an axis (not an Axis!) that represents a physical quantity like tension. The “membership” of a tension value in a fuzzy set is the magnitude of the set at that value. In Boolean logic membership values are always zero, one or undefined (at the transition between zero and one.) Fuzzy sets, unlike Boolean sets, obey the law of noncontradiction everywhere: they vary smoothly between one and zero over some boundary region.
Each rule has a premise (the bit before the “then”) and a conclusion. The truth value of the premise is calculated using various fuzzy operations. The truth value of a simple premise is just the membership of the input value in the set it is associated with (for example, the truth value of the premise tension is low is just the membership of tension in LowT.) For compound premises the individual assertions like tension is low are connected by fuzzy operators such as AND, OR and NOT. There are various ways of defining fuzzy operations, and I’ve chosen ones here that are easy to code and computationally fast. The fuzzy AND operation I’ve represented using multiplication, whereas the usual way is to take the smaller of the two membership values. The fuzzy OR is the larger of the two membership values (which is not used in the rules at the moment) and the fuzzy NOT is just one minus the membership value.
The truth value of the conclusion of a fuzzy rule is usually calculated by truncating the output set at the truth value of the premise. One then does some kind of averaging to “defuzzify” the output and produce the desired control value. In this case, efficiency considerations have led me to put the defuzzification step into the output directly. The conclusion of each rule (the bit after the “then”) in each case is translated into a change in the motor velocity. To decrease the tension, for instance, the motor velocity is increased (positive velocity means the rope is getting longer) and to decrease the length the motor velocity is decreased. The velocity change values are supplied by the functions deltaV(int indicator) and deltaDV(float dv, int indicator) where the flag indicator selects what kind of velocity change you want. Both the magnitude and direction of the velocity change depend on the value of indicator; for instance, to increase tension a velocity change of -0.4 cm/s is returned, and this value is multiplied by the truth value of the premise, then added into the total velocity change. The output of all the rules is just the premise-weighted sum of the velocity changes: this is a very simple defuzzification method that more than makes up in efficiency for what it loses in generality.
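The rule machinery can be sketched as follows for two of the ten rules. The fuzzy operators (AND as multiplication, NOT as one minus the membership) and the -0.4 cm/s change for increasing tension are from the text; the membership thresholds and the +0.4 cm/s output of the second rule are invented for illustration.

```cpp
#include <cassert>
#include <cmath>

// Linear ramp membership: 0 below lo, 1 above hi, linear in between.
double ramp(double x, double lo, double hi) {
    if (x <= lo) return 0.0;
    if (x >= hi) return 1.0;
    return (x - lo) / (hi - lo);
}

// Fuzzy operators as chosen in the text: AND is multiplication,
// NOT is one minus the membership (OR, the larger value, is unused).
double fuzzyAnd(double a, double b) { return a * b; }
double fuzzyNot(double a) { return 1.0 - a; }

// Membership functions for two rules; tension thresholds in newtons
// are invented placeholders.
double isLowT(double t)  { return fuzzyNot(ramp(t, 5.0, 10.0)); }
double isHighT(double t) { return ramp(t, 60.0, 80.0); }

// Premise-weighted-sum defuzzification: each rule's velocity change is
// scaled by the truth value of its premise and summed. Positive
// velocity lengthens the rope, so "increase tension" maps to a
// negative velocity change (-0.4 cm/s, as quoted in the text).
double deltaVelocity(double tension) {
    double dv = 0.0;
    dv += isLowT(tension) * (-0.4);   // if tension is low then increase tension
    dv += isHighT(tension) * (+0.4);  // if tension is high then decrease tension
    return dv;
}
```

Because memberships vary smoothly, the commanded velocity change fades in and out near the thresholds instead of switching abruptly, which is the practical payoff of the fuzzy formulation.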
The fuzzy rules currently in use are suitable for the prototype manipulator in air with the old source carriage. They may have to be modified for the new source carriage, the full detector geometry and the effects of water.
The high and low tension and length rules should be self-explanatory, but some of the other rules warrant comment. The relative tension rules (that is, the rules that deal with the tension relative to the desired tension set by the code) are relaxed toward the end of the path. This relaxation was added because it was found that errors in the tension measurements resulted in these rules preventing the source from reaching the endpoint in some cases. The rules are only partially relaxed, however; otherwise the side rope tensions tend to get very high for endpoints in the neck (why this is so is not clear.)
There are two damping rules; one based on the rate of change of the length error and one on the rate of change of the tension error. Both sets of rules are necessary, especially to prevent interactions between ropes from driving each other into oscillation. A common scenario is that a side rope finds itself with too much tension and slacks off and the opposite side rope finds itself with too little and tightens up. While the ropes are stable individually, the interaction between them leads to more rapid changes in tension than the rules are designed to compensate for, and so the system oscillates. Adding damping based on the rate of change of tension has eliminated this phenomenon. The damping constant for the length error velocity was calculated by observing the period of the oscillations and treating the system like a free oscillator. The damping constant for the tension error velocity was determined empirically.
The Axis object takes the commands shown in Table 4. When running an Axis from the command line the motor is in STEP_MODE and its acceleration profile is determined by simply decreasing the count-down time of the associated clock channel by a fixed amount every step. This amount is set to ten at the moment, but may be as little as one. Once the maximum speed is reached the motor runs at constant velocity. During the acceleration
phase the number of steps is counted, and when the motor gets to within this number of steps of the endpoint it goes into deceleration, increasing the count-down time by the same increment on every poll. The calibration commands for the loadcell will be described in Section 10: they are just calls to the Loadcell calibration commands in any case.
7 The AV Class
The AV class is still only partially implemented. It is intended to supply the rest of the code with information about the state of the acrylic vessel. To do this it must know the geometry of the vessel and its position and orientation in the global co-ordinate system. The vessel geometry is described by entries in the database file AV.dat, as shown in Table 5. The AV constructor just takes the name of the vessel to be used as an argument, and searches the AV.dat file until it finds an entry for an AV object with this name.
The NECK_RING_STATIC_POSITION: entry gives the position of the centre of the neck ring when the centre of the vessel is at the origin of the global co-ordinate system and the neck is pointed straight up. The NECK_RING_RADIUS: is the interior radius of the neck ring. The radius of the allowed region of the neck is given by the NECK_RADIUS: entry: this is the radius within the neck that the source is not allowed to move outside of. The VESSEL_RADIUS: is just the radius of the vessel.
At the moment the AV object is almost entirely non-functional. The functions it needs are prototyped but don't do anything, mostly because the
Table 5: Acrylic vessel database entry
<table>
<thead>
<tr>
<th>Entry</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>NECK_RING_STATIC_POSITION</td>
<td>5.7 0. 364.0</td>
</tr>
<tr>
<td>NECK_RING_RADIUS</td>
<td>42.0 cm</td>
</tr>
<tr>
<td>NECK_RADIUS</td>
<td>42.0 cm</td>
</tr>
<tr>
<td>NECK_LENGTH</td>
<td>27.0 cm</td>
</tr>
<tr>
<td>VESSEL_RADIUS</td>
<td>3.20 m</td>
</tr>
</tbody>
</table>
END;
Table 6: Motor database entry
hardware to measure the vessel position has not yet been integrated into the prototype system. The AV object does not take any commands, although a dummy command has been built into it to satisfy the constraints of the Hardware hierarchy, which expect at least one command per class.
Calls to the AV member functions vtog(ThreeVector x) and gtov(ThreeVector x) have been put in the Axis object and the PolyAxis object at the appropriate places. These functions are supposed to take a position vector x and transform it from the vessel to the global co-ordinate system (vtog()) or vice versa (gtov()). Their return value is the transformed vector. The functionality exists in the ThreeVector class to do this transformation, but I've not got around to implementing it here.
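A sketch of what vtog() and gtov() are meant to do follows. Here the vessel frame is assumed to differ from the global one only by a translation of the vessel centre and a rotation about the vertical axis; the real transform measured by the AVPsense system may include tilts as well, and the class below is a stand-in, not the real AV interface.

```cpp
#include <cassert>
#include <cmath>

struct Vec { double x, y, z; };

// Vessel <-> global transform: translation of the vessel centre plus
// a rotation phi about z. Tilt of the vessel is neglected here.
struct AVFrame {
    Vec origin; // vessel centre in global co-ordinates
    double phi; // rotation about the vertical axis, radians

    // vessel -> global
    Vec vtog(const Vec& v) const {
        double c = std::cos(phi), s = std::sin(phi);
        return {c * v.x - s * v.y + origin.x,
                s * v.x + c * v.y + origin.y,
                v.z + origin.z};
    }
    // global -> vessel (inverse: un-translate, then rotate by -phi)
    Vec gtov(const Vec& g) const {
        double c = std::cos(phi), s = std::sin(phi);
        double dx = g.x - origin.x, dy = g.y - origin.y;
        return {c * dx + s * dy, -s * dx + c * dy, g.z - origin.z};
    }
};
```

The essential contract is that gtov() inverts vtog() exactly, so positions can be shuttled between frames without drift.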
8 The Motor Class
Motor objects control stepping motors by setting the period of the clock on an output channel of the TIO10 card. The motor takes a half-step for every clock pulse. Several types of motor may be used in the current system: one type for manipulator control, another for the laser system and a third for the steerable source. Some motor parameters are read in from the MOTOR.dat database file entries to facilitate the use of different motor types. A typical database entry is shown in Table 6.
The database name of a Motor object is not the same as the name it is referred to in the code. The Motor constructor takes two names as inputs: one is the database name and one is the name used in the code. The latter is generally constructed by the object that the Motor object is part of. In the current set up, the Motor for axis0 would be named axis0Motor.
The CHANNEL: is the TIO10 channel for this motor, which is determined by the wiring of the external hardware. The START_SPEED and CRUISE_SPEED: are precisely what you would think they are, given in cm/s. 7.0 cm/s is the maximum allowed cruise speed and 0.07 cm/s is about the minimum allowed start speed. These limits are enforced by the code.
The Motor object has a number of features that are hardcoded to reflect the current configuration. The PIA chip on the TIO10 card is used to set bits that control the direction of a motor and turn all the windings off when the motor is idle. Two arrays, dirBit[] and awoBit[], contain the bit positions associated with each output channel. If the outputs are rewired for any reason these arrays will have to be changed. Note that channels 4 and 5 are reserved for the realtime clock, and this condition is also imposed by the Motor code. The final hardcoded constant is STEPS_PER_CM, which reflects the mechanical connection between the motor and the rope spool.
As mentioned above, the Motor object has two control modes: STEP_MODE and VELOCITY_MODE. In STEP_MODE the motor's acceleration profile is determined by simply decreasing the count-down time of the associated clock channel by a fixed amount every step. This amount is set to ten at the moment, but may be as little as one. Once the maximum speed is reached the motor runs at constant velocity. During the acceleration phase the number of steps is counted, and when the motor gets to within this number of steps of the endpoint it goes into deceleration, increasing the count-down time by the same increment on every poll.
In VELOCITY_MODE the variable desiredPeriod is assumed to have been set by a preceding call to setDesiredVelocity() or changeDesiredVelocity(), both of which set the desired period as well as the desired velocity. The desired direction is also set by these functions; note that the velocities are signed quantities, but the period is always positive. In STEP_MODE the motor's direction is never allowed to reverse; in VELOCITY_MODE it is. To facilitate reversal the Motor object has three possible states: on, off and standby. The variable onFlag is used to store this state information, with a value of 0 meaning off, 1 meaning on and 2 meaning standby. This allows a motor to be slowed to a halt and then restarted (possibly in the opposite direction) if the Axis control algorithm is not done with it. Note that when the motor has been turned off by setting onFlag to zero it can't be restarted without initiating a new command sequence from the keyboard or Server object.
As well as onFlag there is a variable called stopFlag which in the past was used to communicate with the Axis object to allow the Axis object to stop a motor when the Motor object thought it was done with it. I don't think this flag is used anywhere in the code any more, but have kept it.
The rate at which the motors are polled has an effect on the control algorithm. At the moment they are still being polled a little faster than needed: the interval between polls must be at least LOOP_TIME, which is $10^{-4}$ seconds at the moment. It could probably be increased by a factor of ten without significant loss of performance, and because the current fuzzy rules were developed for a longer polling time (because of the use of a non-linear minimization algorithm in PolyAxis to find the source position) they may not work quite so well under the current circumstances. But as they will have to be changed when we go to full scale it is probably worth waiting until then to change them. A related quantity is DVLIMIT, the maximum change in motor velocity allowed per polling cycle. If this is too large the motors may stall. It is currently set at 0.5 cm/s.
The only command the Motor object accepts from the keyboard is maxspeed [speed], which sets the maximum speed in cm/s.
9 The Encoder Class
The Encoder class reads the output of a shaft encoder via custom SNO electronics. The Encoder class itself is a Level 3 object, and it uses a rather messy Level 2 object, the DigitalChannel class, to do actual hardware access. Encoder objects know about the calibration of their encoder, and a hardware address they tell their DigitalChannel object to read values from. A typical entry from the ENCODER.dat database file is shown in Table 7.
Table 7: Encoder database entry
<table>
<thead>
<tr>
<th>Encoder: encoder0</th>
</tr>
</thead>
<tbody>
<tr>
<td>ZEROLENGTH: 747.045410 // rope length when encoder reads zero</td>
</tr>
<tr>
<td>ADDRESS: 168 // encoder board address</td>
</tr>
<tr>
<td>SLOPE: 0.2392 // conversion from ADC value to cm</td>
</tr>
<tr>
<td>DATE: 00-00-0000 // last calibration date (not used)</td>
</tr>
<tr>
<td>END;</td>
</tr>
</tbody>
</table>
The ADDRESS: is the address of the encoder board on the dataConcentrator chain. The whole addressing scheme will have to be revised when the dataConcentrator hardware is ready in its final form, so not a lot of attention will be paid to it here. The ZEROLENGTH: of the rope is the total length including the dead length specified in the associated Axis object. Because the Encoder class is at Level 3, it does not know anything about the Loadcell class, which is also at Level 3, or the Axis class, which is above it at Level 4. Therefore all it can know is the total length of its rope, without any correction for stretch. The ZEROLENGTH: database field is updated once per second when the length is changing and not at all otherwise. The value used to update ZEROLENGTH: is the CURRENT LENGTH. This is because the Encoder constructor resets the counter to zero at startup, and so the old current length becomes the new zero length.
reset | reset counter to zero
read | read and display value
readloop | enter read/display loop
stoploop | quit read/display loop
setZeroLength [len] | set zero length to len cm
Table 8: Encoder commands
The list of commands the Encoder object will accept from the keyboard is shown in Table 8. The readloop command causes the Encoder to print its value to the screen every time it is polled.
10 The Loadcell Class
The Loadcell object is much like the Encoder object: it is a Level 3 object that accesses hardware via the AnalogChannel object. An entry in the Loadcell database file LOADCELL.dat is shown in Table 9. Like the Encoder, some of the hardware-access data will have to be modified when the final dataConcentrator hardware becomes available.
The most important feature of the Loadcell object is its calibration facility. Calibration mode is turned on by running the "calibrate" command from the keyboard. Subsequent "calibrationPoint" commands are given with various masses hung on the rope. The value of each mass is given on the command line as well. When two or more calibration points have been given, the user enters the "calibrate" command again and the new calibration constants are written to the database file.
VERSION1.00
LOADCELL: loadcell10
MAXLOAD: 50 lb // maximum load in pounds
SLOPE: 0.6777730537 // conversion from ADC value to newtons
OFFSET: -482.535980 // according to N = slope*adc + offset
CHANNEL: 3 // ADC channel setting
DATE: 00-00-0000 // unused calibration data
END;
Table 9: Loadcell database entry
Tests have shown the loadcells to be quite linear over most of their range.
However, for very small loads the electronics they are connected to is quite non-linear. For this reason it is necessary to add a 75 kΩ resistor between the red and green leads of each loadcell to provide a small bias that will prevent the amplifiers from being driven into the non-linear regime. The addition of this resistor adds about 0.7% non-linearity to the overall response of the system, which is an acceptable loss to avoid the region of much higher non-linearity.
There are still some bits of old code hanging about from an earlier incarnation of the code: the setTensionLimits() function is no longer used and may be discarded. Its role has been taken over by the net force limits in the PolyAxis object.
11 Low Level Classes
There are a whole bunch of Level 1 and 2 classes for dealing directly with hardware. I didn’t write most of them, and those I did are going to have to be re-written in fairly short order. Here are a few comments about some of them.
11.1 The DigitalChannel Class
The DigitalChannel class accesses the counters on the dataConcentrator board via a simple PC interface card that will be replaced by the real dataConcentrator interface card soon. It does not deal with the encoder boards out on the axis hardware at all, but just calls in the value from one of the counter channels on the dataConcentrator card itself. No facility for handling multiple dataConcentrator cards exists at the moment, but one will have to be written when the new hardware arrives.
11.2 The AnalogChannel Class
The AnalogChannel class handles the A/D converter on a dataConcentrator card. Like the DigitalChannel class it has no way of dealing with multiple dataConcentrator boards. When the new hardware arrives these two objects should, at best, be made subobjects of a new dataConcentratorBoard class which itself would be a subobject of a dataConcentratorChassis class. At worst, these classes should be scrapped altogether and something more sensible put in their place.
11.3 The TIO10 Class and Its Components
The TIO10 class handles the National Instruments TIO10 card. It implements essentially all of the functionality described in the TIO10 manual. It was written almost entirely by Aksel Hallin. The TIO10 class has subclasses that describe the main chips: the AM9513 timer and its channels, AM9513Channel objects, and the pia class to handle the MC6821 peripheral interface adapter. These Level 1 classes access the card registers directly. The only way they need to be extended is to add some more functions for checking the status registers so that one can tell that the card is actually there and functioning.
Note that the TIO10 class and its subclasses are not part of the Hardware hierarchy. They are not polled and do not accept commands.
12 The Display Class
Display is a simple screen-handling class that is globally accessible to allow all Hardware-derived objects to print stuff to the screen in a semi-organized way. The top level code sets up the Display class to put the cursor on line 19 of the screen, and I try to restrict error messages and other outputs from Hardware-derived classes to the five or six lines below this. The Hardware class hierarchy itself uses the middle part of the screen for error messages, and the very top of the screen is used by the top level code to display the source position and various error terms, as well as the time.
Messages are sent to the Display object, which is called display, by first printing the string you want to output to the public member `display.outString` and then calling the function `display.message(int x, int y)` where `x` and `y` give the location on the screen to start the message. An alternative command is `display.messageB(int x, int y)` which blanks the line first, to eliminate overlap with the tag end of old messages.
13 The Keyboard Class
The Keyboard class takes input from the command line, including handling backspaces and the like. It does not care for arrow keys much. A useful bit of added functionality would be a command history.
14 The Clock Class
The Clock class is a realtime clock that uses channels 4 and 5 of the TIO10 card. A Clock object called RTC for Real Time Clock is defined with global scope at the top level. The time in seconds from program startup is returned as a double precision number by `RTC.time()`.
15 The Server Class
The Server class needs some work. It is supposed to connect the calibration PC with the outside world, and allow a privileged client to send commands to the manipulator control code and make them look like they came from the keyboard. The Keyboard class should probably be called by the Server, rather than the `Hardware::doCommand()` function as is now the case. The real problem with Server is that the communications functions are still not working, for reasons that are obscure. A standalone test code in C:motors comm seems to work fine, but the behaviour is not the same when integrated into the main code.
Note that there are a bunch of special libraries that have to be set in the makefile to handle the Server class. Some of the definitions in these libraries or their associated header files conflict with similar Borland C++ definitions. For this reason the actual Server object in the main code is defined as a static variable in a separate file: servprot.cpp.
Note also that conflicting definitions from the PCTPC stuff produce three link-time warnings during compile.
16 Helper Classes and Functions
There are a whole bunch of helper functions that are part of this system. The most important are the ThreeVector class, which handles all the geometric transformations, and the various database utilities in ioutilit.cpp. All I have time to say is that you should look at the way these helpers are used in the existing code to deal with geometric and I/O problems and see if you can use their functionality rather than rolling your own.
17 Mechanical Considerations
There are a whole bunch of features of the mechanics that have not been discussed here. A short list is:
- air cylinders or alternative tensioning devices
- D₂O inventory and control
- role of umbilical in control problem
- the role of the neck in control problem
- lots of others that have slipped my mind just now
We need to keep these problems in mind and worry about them whenever possible. In particular, we need to know how the system behaves in the neck and in the lower half of the vessel. The fuzzy rules may need to be modified to ensure stable control in these regions.
18 Top Level Functionality
The top level code at the moment traps a few commands before they can be passed to the Hardware parser. They are:
quit | stop all motors and exit program
log | toggle logging state
debug | toggle debugging state
Turning on logging opens a file where objects like PolyAxis and Axis can dump state information while they are moving. This is particularly useful in adjusting the fuzzy rules in Axis. There is a standalone routine called logsort.cpp that breaks the log file into axis-specific parts that contain state information and rule outputs. The files have names like a0rules.dat and a0state.dat. There are Origin spreadsheet files called a0.org and the like that allow these data files to be read in and automatically plotted, so that you can see what rules are active when, and what state information is really causing that interesting oscillation.
The debugging toggle just sets or unsets a global variable called debugging that can be used by objects in their polling functions to decide if they want to dump some data to the screen.
There is also one command line argument that the main code accepts: "forceMap". This generates three files that contain the force vector from each rope on the source for a range of positions. The files are named "axis0.map" and so on. The current force map code is set up for the prototype: hardcoded limits must be changed to map the real detector.
The code sometimes generates floating point exceptions. I think all the sources of this have been found, but just in case a floating point exception handler has been written, and is loaded by a call to signal() in the top level code. When an FPE occurs the exception handler stops all the motors before exiting. Although hardware protection for the motors is planned, one may as well have as many levels of protection as possible.
19 Compiling
There are several settings that have to be changed to compile this code. The file builtins.mak has to be updated according to the specifications in the PCTCP manual. The stack length has to be changed by setting the variable -stklen in the top level code (this generates a compiler warning) and the correct PCTCP libraries have to be linked in the correct order (these libraries and their associated header files generate three compiler warnings.) The large memory model should be used, with Borland C++ source set in the compiler options. I use the default optimization, and turn on all warnings except the one about "functions containing for-loops are not compiled inline." The code itself should generate no warnings. At the moment there is an “unreachable code” warning generated from the Server poll() function because the first line is a return statement, as the code itself does not work.
20 Co-ordinate Systems
There are two co-ordinate systems used in the code: the global system and the acrylic vessel system. The lower attachment points of the side ropes are fixed with respect to the AV, but the AV can move with respect to the global co-ordinate system. The global co-ordinate system is officially defined somewhere, but I don't know what it is precisely. The centre of the PSUP is approximately the origin, and the X-axis is along the electronics corridor, so the Y-axis is approximately north, I think. The Z-axis is local vertical, positive up. All the calculations are done in the global co-ordinate system: the positions of the lower attachment points are transformed before anything is done with them.
21 Things To Be Done
I've tried to indicate things that need to be done as I've gone along. There are a lot of them. The most important things at the moment are as follows:
- get the Server object working and discuss communications with John Wilkerson
- get the air cylinders working
- install and support full dataConcentrator hardware
- get the AVPense boards integrated into the system, supported in software and tested.
- modify the new source carriage to let the weight swing free
- recompile the code with Borland 5.0 and run CodeGuard on it to ensure no memory leaks and other nasties.
- test/modify the fuzzy rules for the neck region and the lower half.
There are probably a lot of other things I've forgotten here as well, but that list appears sufficient to keep a few people busy for a while.
Figure 1: Manipulator Control Classes. Rounded rectangles are classes with global scope; dotted lines show references. The diagram shows PolyAxis, AV, Motor, Encoder, Loadcell, Server, Clock and Display, together with the low-level classes DigitalChannel, AnalogChannel, tio10, 2 x am9513, 2 x pia and their am9513channel objects.
96-03-22
Thomas J. Radcliffe, Queen's.
Nominal Dimensions:
- Neck Radius: 75 cm (29.6 in)
- Neck Base Ring Radius: 72.5 cm (28.6 in)
- Neck Length: 703.2 cm (276.9 in)
- Vessel Radius: 600.5 cm (236.4 in)
Note: neck length includes flanges etc., vessel radius is inner radius
Figure 3: Fuzzy Sets for Axis Control
96-03-27 Thomas J. Radcliffe
ABSTRACT
Memory consistency specifications (MCSs) are a difficult, yet critical, part of a concurrent programming framework. Existing MCS testing tools are not immediately accessible, and thus, have only been applied to a limited number of devices. However, in the post-Dennard scaling landscape, there has been an explosion of new architectures and frameworks. Studying the shared memory behaviors of these new platforms is important to understand their behavior and ensure conformance to framework specifications.
In this paper, we present GPUHarbor, a wide-scale GPU MCS testing tool with a web interface and an Android app. Using GPUHarbor, we deployed a testing campaign that checks conformance and characterizes weak behaviors. We advertised GPUHarbor on forums and social media, allowing us to collect testing data from 106 devices, spanning seven vendors. In terms of devices tested, this constitutes the largest study on weak memory behaviors by at least 10x. Our results reveal large differences in weak behavior occurrence rates across vendors (e.g., AMD GPUs show 25.3x more weak behaviors on average than Intel). We conclude with a discussion of the impact our results have on software development for these performance-critical devices.
CCS CONCEPTS
• Software and its engineering → Empirical software validation; • Computing methodologies → Parallel programming languages; • Graphics processors.
KEYWORDS
memory consistency, GPUs, mutation testing
1 INTRODUCTION
The end of Dennard Scaling has brought about an explosion of multiprocessor architectures that improve application performance through large-scale parallelism. Graphics Processing Units (GPUs) exemplify this trend and are now integral components of many systems, from smartphones to large HPC supercomputers. While GPUs were previously primarily used for graphics applications, they now have applications in a variety of areas including machine learning [42] and particle simulations used in drug development [38]. GPUs are even being used for security and safety-critical applications such as encryption [37] and self-driving cars [13], making safety and correctness an increasing concern on these devices.
Because GPUs are produced by several vendors (NVIDIA, AMD, Intel, etc.) and evolve rapidly, many different devices are currently deployed. These devices vary both in their performance, as well as their functional behavior. To account for this, the community has developed portable GPU programming frameworks, such as Vulkan [22] and WebGPU [51], as unified abstractions to target these diverse devices.
Memory consistency specifications (MCSs), which define the semantics of shared memory operations, are an important part of these abstractions. While MCSs provide many guarantees, such as atomicity and coherence, they often allow an architecture to implement weak memory behaviors to improve efficiency [34]. For example, x86’s relaxed MCS [44] allows store buffering behaviors, in which a processor may buffer the stored values before flushing them to a shared memory location; as a result, another processor may observe the buffered store occurring out-of-order.
Because relaxed MCSs can be complex and nuanced, there is a history of platforms (compilers and architectures) containing MCS conformance bugs [1, 4, 25, 30, 31]. That is, the MCS provides a guarantee that the implementation does not honor. Due to the non-determinism of concurrency, MCS bugs may occur extremely rarely, or only when provoked, e.g., by side-channel stress [46]. Apart from conformance, a device’s weak behavior profile, i.e., the frequency at which allowed weak behaviors occur and how system stress influences this frequency, is also a useful metric. For example, this can be useful in developing conformance testing strategies [28] and enables developers to reason about tradeoffs between accuracy and performance in approximate computing that judiciously elides synchronization [35, 41, 43].
Unfortunately, previous GPU testing work had limited scope, testing only a small number of devices [25, 28], with the largest study testing eight devices [1]. These approaches did not scale due to the difficulty of portable GPU application development and deployment, e.g., while frameworks like OpenCL [21] are portable in theory, there are many difficulties in practice [47]. Consequently, little is known about the MCS conformance and weak behavior profiles at large. This is especially problematic as portable GPU frameworks depend upon many layers and environments (e.g., architectures, compilers, runtimes, operating systems, etc.); it is difficult to extrapolate insights from a small number of platforms tested in controlled environments to the diverse universe of deployed GPUs.
GPUHarbor. In this paper, we present a large-scale study of GPU MCS testing, which, to the best of our knowledge, tests 10x more devices than previous studies. Figure 1 summarizes our study, including the number of GPUs that we tested (106), broken down by two frameworks (WebGPU and Vulkan) and seven vendors (Intel, Apple, NVIDIA, AMD, Arm, Qualcomm, and Imagination). This scale is empowered by GPUHarbor, a new cross-platform GPU MCS testing tool suite. GPUHarbor includes two front-ends, a browser web app (using WebGPU) and an Android app (using Vulkan). We advertised our web app on campus forums and social media to obtain a significant number of WebGPU results. We test far fewer Vulkan devices as our Android app is not yet widely accessible on the Google Play Store, but in Sec. 7 we discuss how we will enable larger mobile studies on both Android and iOS.
GPUHarbor uses litmus tests, small concurrent programs that check for load/store reordering corresponding to weak memory behaviors. Current GPU MCS testing tools execute litmus tests many times in succession to check conformance and characterize devices [1, 25, 46]. However, these prior approaches have several shortcomings: (1) they are implemented in vendor-specific languages, e.g., CUDA; (2) they require expert users to build, configure, and execute tests on each device, e.g., as is the case for OpenCL; or (3) litmus tests were embedded in vendor-specific stress testing environments and thus, would not execute efficiently on other devices. This cumbersome litmus testing workflow made it infeasible to perform a large-scale study. In contrast, GPUHarbor defines litmus tests using a neutral configuration (written in JSON), which it compiles to a portable shading language (WGSL [50] or SPIR-V [20]). The resulting litmus testing application then tunes the testing stress automatically. The net result is a fully automated and easy-to-use tool for GPU MCS testing at large. Table 1 shows how many weak memory litmus test iterations were run and how many weak behaviors were observed in our study.
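GPUHarbor's actual configuration schema is not reproduced in this text; purely as an illustration, a neutral JSON description of the SB litmus test, as described above, might look something like the following hypothetical sketch (all field names are invented):

```json
{
  "name": "store-buffer",
  "threads": [
    { "actions": [ { "op": "write", "loc": "x", "value": 1 },
                   { "op": "read",  "loc": "y", "out": "r0" } ] },
    { "actions": [ { "op": "write", "loc": "y", "value": 1 },
                   { "op": "read",  "loc": "x", "out": "r1" } ] }
  ],
  "weakOutcome": { "r0": 0, "r1": 0 }
}
```

Such a neutral description could then be compiled to either WGSL or SPIR-V, as the paragraph above explains.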
We perform the following two investigations on our data set: (1) we examine the results of MCS conformance tests and find two new bugs in mobile device GPUs from Arm and NVIDIA, and (2) we characterize weak memory behavior profiles, i.e., the rates at which allowed weak behaviors occur and how system stress influences these rates, across devices and vendors.
Contributions. In summary, our contributions are:
1. Tooling: We introduce GPUHarbor, a new cross-platform GPU MCS testing tool with accessible web and Android interfaces (Sec. 3).
2. GPU MCS Weak Behavior Characterization: We conduct a large GPU weak memory characterization and conformance testing study, collecting data from 106 GPUs (Sec. 4).
3. Conformance Testing and Analysis: (a) We discover two unreported bugs in Arm and NVIDIA devices (Sec. 5.1). (b) We analyze statistical similarities across GPUs and describe the impact on testing strategies and device fingerprinting (Sec. 5.2). (c) We discuss how weak behavior profiles impact the development and testing of synchronization algorithms on GPUs (Sec. 5.3).
4. Lessons Learned: We detail the lessons learned while designing and running this study, providing a guide to other researchers seeking to implement similar large experimental explorations (Sec. 6).
All of the data we collected as part of our study, and the tools used to do so, are available as part of our artifact [27]. In addition, GPUHarbor’s web interface is hosted by UC Santa Cruz and can be found at https://gpuharbor.ucsc.edu/webgpu-mem-testing/.
2 BACKGROUND
In this section we provide an overview of memory consistency specifications (Sec. 2.1), define the litmus tests we run and how they allow reasoning about relaxed memory models (Sec. 2.2), and introduce GPU programming concepts from the WebGPU and Vulkan GPU frameworks, including descriptions of their MCSs (Sec. 2.3).
2.1 Memory Consistency Specifications
Today, the memory consistency specifications for architectures, e.g., x86 [44], and languages, e.g., C++ [7], are formalized using mathematical logic. This formalism represents shared memory program executions as a set of memory operations, e.g., reads, writes, and read-modify-writes, and relations between these events,
e.g., happens-before (hb). Allowed executions are defined by constraints on some of these relations, e.g., hb is required to be acyclic.
The strongest MCS is sequential consistency (SC) [26], which states that concurrent program executions must correspond to a total hb order such that the order respects the per-thread program order, allowing events from multiple threads to be interleaved. In relaxed MCSs, the hb relation is a partial order, allowing various weak behaviors (i.e. executions that are not SC) if shared memory operations on multiple threads are not synchronized.
There is a large body of work focused on formalizing MCSs, including a model for Vulkan’s [19]. WebGPU generally follows the Vulkan MCS, with prior work [28] formalizing portions of its MCS necessary for reasoning about simple litmus tests. However, for this work it is not necessary to understand the full formalization of the WebGPU and Vulkan MCSs, so we describe the necessary subset of the specification briefly and informally. In addition, we follow prior work on MCS testing [25, 28] and consider only trivially data-race-free programs where all operations are atomic, as our intention is not to test the behavior of programs with undefined semantics (caused by data races).
Our Target MCS. Because Vulkan is one of several backends to WebGPU, the MCS for WebGPU is a subset of the MCS for Vulkan. In order to provide a unified study across both frameworks, we target only the WebGPU MCS, which we then map to its Vulkan counterpart. The WebGPU MCS provides very little inter-workgroup synchronization due to the diversity of backends it targets, with the weakest backend being Apple’s Metal [6], which provides only relaxed atomic operations. These operations, which come from the C++ memory model [7], compile to plain loads/stores at the architectural level, but at the language level provide few synchronization guarantees between threads.
The one inter-workgroup MCS property provided by WebGPU atomics is coherence, which states that memory accesses to a single location must respect sequential consistency; sometimes called SC-per-loc [4]. However, memory accesses to disjoint addresses are allowed to be reordered. Mapping these WebGPU atomics to Vulkan is straightforward; all WebGPU atomic accesses are simply mapped to SPIR-V atomic accesses with a relaxed memory order. While our testing campaign considers only relaxed memory accesses, Vulkan allows additional memory orders; specifically, acquire and release. While the precise semantics of these memory orders is complex, especially when combined with other relaxed atomics, we note that they are required to implement the required synchronization in many common concurrency constructs, such as a mutex.¹ The lock() method needs to execute an acquire atomic operation when the mutex is obtained and the unlock() method requires executing a release atomic operation. If a mutex is implemented without these memory orders, it is possible to violate mutual exclusion, as we show in Sec. 5.3.
2.2 Litmus Tests
Litmus tests are small concurrent programs that illustrate [45], compare [29, 48], and empirically test [1, 3, 25] MCSs. These tests contain a condition on the final state of local variables and memory values that checks for weak behaviors. For example, the program in Fig. 1a is known as the message passing (MP) litmus test, in which one thread writes to a memory location x and then to y, while a second thread reads from y and then from x. As mentioned earlier, in this work, we assume that all of the memory operations in a litmus test are atomic, which in languages that follow the C11-style MCS [7] ensures that the semantics of shared memory operations are well-defined. Additionally, unless explicitly noted otherwise, we consider these atomic operations to have a relaxed memory order, which allows compilers and hardware to aggressively optimize their execution.

---

¹WebGPU does not provide inter-workgroup acquire and release memory orders, so it is not currently possible to implement a well-specified mutex in WebGPU.
The condition underneath the test shows an outcome that only occurs in relaxed executions. In this case, the behavior corresponds to an execution where the read of y returns 1 but the read of x returns 0. While some relaxed MCSs do not allow this behavior, e.g., the x86 MCS [44], many other relaxed MCSs, especially ones for languages like C++ [7], do allow the behavior. As mentioned earlier, our target WebGPU MCS does not provide any guarantees outside of coherence, and thus the two memory accesses per thread (which target disjoint addresses) are allowed to be reordered. In cases where the weak behavior is allowed (both by the MCS and the implementation), the rate at which this behavior is observed on real systems is highly dependent on system stress. Early GPU application development work did not observe any weak behaviors, despite specifications allowing them [12]. However, later work added specialized system stress around the test execution and revealed many cases of surprising weak behaviors [1, 46].
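To make this concrete, the weak outcome can be checked against SC by brute force: enumerating every SC interleaving of MP's four operations shows that r0 == 1 && r1 == 0 never arises, which is exactly why observing it flags a non-SC execution. A small illustrative Python sketch (not part of GPUHarbor's tooling):

```python
from itertools import combinations

def mp_outcomes():
    """Enumerate all SC interleavings of the MP litmus test and
    collect the reachable (r0, r1) outcomes."""
    t0 = [("store", "x"), ("store", "y")]   # writer thread
    t1 = [("load", "y"), ("load", "x")]     # reader thread: r0 = y, r1 = x
    outcomes = set()
    for pos in combinations(range(4), 2):   # slots taken by t0's events
        seq, i0, i1 = [], 0, 0
        for k in range(4):
            if k in pos:
                seq.append(t0[i0]); i0 += 1
            else:
                seq.append(t1[i1]); i1 += 1
        mem, regs = {"x": 0, "y": 0}, {}
        for op, loc in seq:
            if op == "store":
                mem[loc] = 1
            else:
                regs[loc] = mem[loc]
        outcomes.add((regs["y"], regs["x"]))  # (r0, r1)
    return outcomes
```

Running `mp_outcomes()` yields only the three SC-reachable outcomes; the missing `(1, 0)` pair is the weak behavior the litmus test hunts for.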
Executing litmus tests on deployed systems can be used for two purposes, which we will illustrate using a litmus test L that can exhibit a weak behavior execution e, and an MCS S.
(1) Conformance testing: if e is disallowed in S then we can check implementations of S. That is, if a platform p claims to implement S, then we can execute L many times on p, checking for e. The observation of e would indicate a bug.
(2) Profiling weak behaviors: if e is allowed on S, and a platform p claims to implement S, then we can execute L many times on p to understand the extent to which that platform allows e. In some cases, p might not show e empirically, or
maybe e appears more frequently under a certain configuration of system stress. A collection of this type of data creates a weak memory profile for p.
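These two uses can be summarized in a few lines. A hedged sketch, where `interpret`, `weak_count`, and `allowed_by_mcs` are hypothetical names for illustration:

```python
def interpret(test_name, weak_count, total, allowed_by_mcs):
    """Classify a litmus test run: conformance bug vs. weak-memory profile entry."""
    if weak_count > 0 and not allowed_by_mcs:
        # e is disallowed in S but observed on p: implementation bug
        return (test_name, "bug", weak_count)
    # e is allowed in S: record how often p exhibits it
    return (test_name, "profile", weak_count / total)
```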
Prior work [28] has utilized weak memory profiles in highly tuned conformance testing. In that work, it was shown that allowed MP executions could be used to tune system stress for disallowed behaviors in associated conformance tests. For example, the MP-CO litmus test, shown in Fig. 1b, is similar to MP, except that every memory access targets the same memory address and different values are stored (required to identify a weak behavior). Given that there is only one address used in MP-CO, the weak behavior in this test is disallowed under coherence, and thus in the WebGPU MCS. If certain system stress reveals weak behaviors in the allowed MP litmus test, then, in the case where a platform contains a bug, it is likely to reveal the buggy behavior in the MP-CO conformance test. In Sec. 3.1 we show the litmus tests used in our experimental campaign, and in Sec. 5.1 we illustrate the effectiveness of the approach of prior work [28] by describing two new bugs.
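The tuning-then-conformance workflow described above is, at its core, a small search loop. A sketch under the assumption of a hypothetical `run_test` callback that returns the number of weak behaviors observed for a given test and stress configuration (parameter names are illustrative, not GPUHarbor's actual schema):

```python
import random

def tune_then_conform(run_test, mutant, conf_test, n_configs=50, seed=2023):
    """Find the stress config maximizing weak behaviors on an allowed mutant,
    then reuse it on the associated conformance test (nonzero result => bug)."""
    rng = random.Random(seed)
    configs = [{"workgroups": rng.randint(4, 1024),
                "stress_lines": rng.randint(64, 2048),
                "stress_pattern": rng.randint(0, 3)} for _ in range(n_configs)]
    best = max(configs, key=lambda c: run_test(mutant, c))
    return best, run_test(conf_test, best)
```

For example, `tune_then_conform(run_test, "MP", "MP-CO")` tunes on MP and then checks coherence with MP-CO under the winning configuration.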
2.3 GPU Programming
This study targets two cross-platform GPU frameworks, Vulkan and WebGPU. Vulkan is a modern graphics and compute API that can be run on many Linux, Android, and Windows devices, and can target Apple devices through the MoltenVK [23] portability layer. WebGPU is designed to run in browser environments and is compiled to different backends depending on the operating system of the device (Direct3D [33] on Windows, Vulkan on Linux/Android, and Metal [6] on Apple devices).
Both Vulkan and WebGPU define their own programming languages, called SPIR-V and WGSL respectively. Programs written in these languages are called shaders and run on the GPU, while the APIs used to allocate memory on the GPU and dispatch shaders are written in the language of the host device, commonly C++ for Vulkan and JavaScript for WebGPU. In this work, we discuss the complexities of writing tools that must be implemented in different languages and how future development (Sec. 7) could ease the difficulty of cross-platform GPU MCS testing.
**GPU Execution Model.** GPUs run thousands of concurrent threads (invocations in Vulkan and WebGPU) organized hierarchically and executed in a single-instruction, multiple-thread (SIMT) format. To support this execution model, in WGSL and SPIR-V threads are partitioned into discrete workgroups, with built-in identifiers used to query a thread’s workgroup id. Workgroups are limited in size (e.g. 1024 in CUDA, with limits varying depending on the device in WGSL/SPIR-V) and have access to an efficient shared memory region. A group of threads organized into workgroups and running on the device is called a grid, with the number of threads per workgroup and the number of workgroups specified at dispatch time. All threads in the same dispatch have access to a global memory region.
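As a concrete picture of this hierarchy, a thread's flattened global id is derived from its workgroup id, the workgroup size, and its local id; real shaders obtain these from built-ins, but for a 1-D dispatch the arithmetic can be sketched as:

```python
def grid_ids(num_workgroups, workgroup_size):
    """Enumerate (workgroup_id, local_id, global_id) for a 1-D dispatch.
    global_id = workgroup_id * workgroup_size + local_id."""
    return [(wg, local, wg * workgroup_size + local)
            for wg in range(num_workgroups)
            for local in range(workgroup_size)]
```

For instance, a dispatch of 2 workgroups of 3 threads yields global ids 0 through 5.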
While our target MCS was discussed in the previous section, we note that GPU atomic operations can be annotated with a memory scope. Two common scopes in Vulkan and WebGPU are workgroup, which specifies that synchronization occurs only between threads in the same workgroup, and device, which specifies that synchronization occurs across all threads executing on the device. Threads within workgroups generally have access to efficient primitive barrier operations, e.g., workgroupBarrier in WebGPU. However, highly optimized implementations of important parallel routines (e.g. inter-workgroup prefix scans [32]) rely on fine-grained inter-workgroup communication. Thus, like prior work [25, 28], we see a more imminent need for testing MCS properties at the inter-workgroup level, which we keep as our sole scope for this work. Similarly, GPU programs have several different memory types, e.g., workgroup-shared memory and device-wide memory. Given that we consider only inter-workgroup interactions, we only consider device-wide memory.
3 SYSTEM OVERVIEW
Building on approaches in prior work [28], we discuss our testing campaign (Sec. 3.1) and the development of our MCS testing tools that are easily accessible on a wide range of devices, summarized in Fig. 2. We overview each stage of the tooling, starting with litmus test generation (Sec. 3.2), moving on to the design of GPUHarbor’s web interface and Android app (Sec. 3.3). We end the section by describing our data collection process (Sec. 3.4).
3.1 Litmus Test Selection
The tests we utilize in our study build off of the MCS mutation testing strategy used in [28]. We use 32 mutants, out of which 24
are litmus tests with weak behaviors allowed by the WebGPU MCS. The mutants are used to find effective system stress to then run the conformance tests. Our results analysis focuses on characterizing the rates of weak behaviors of six of the mutants, one of which is MP (Fig. 1a), with the other five shown in Fig. 3. These tests enumerate all the combinations of four instructions on two threads that can lead to weak behaviors. Thus, they capture testing for all pair-wise memory reorderings. For example, the SB test checks for store-load reorderings, while the LB test checks for load-store reorderings. Additionally, these tests capture synchronization patterns used in common concurrency algorithms like a compare-and-swap spinlock. Because of this, prior work has also focused on these tests and has shown their utility in finding bugs in both applications and MCS implementations.
Once the mutants are run, we use the weak behavior profile of a device to determine an effective system stress configuration to run conformance tests under. We utilize the 20 conformance tests from [28]. As a concrete illustration using one mutant and conformance test, we would run the MP test under many different system stress configurations to build a weak behavior profile. We then use the most effective configuration at revealing MP weak behaviors to run a closely related conformance test, e.g., MP-CO (Fig. 1b). This approach was shown to be effective at finding bugs in prior work [28] and we further show its effectiveness by discovering two new bugs: a violation of MP-CO on Arm devices and a violation of MP-CO on an NVIDIA device (see Sec. 5.1).
3.2 Litmus Test Generation
We now discuss our tooling that generates and runs our testing and characterization campaign. Litmus test behaviors are non-deterministic and sensitive to system stress. Due to this, the shaders that run the litmus tests contain not only the actual litmus test instructions, like those in Fig. 3, but take in a number of parameters and provide functions that are used to construct system stress.
To provide a standardized interface for defining litmus tests in different GPU languages, we built a tool, Litmus Generator (LitGen, in Fig. 2), which is similar to previous litmus testing tools [3] but is specifically targeted to create GPU programs with system stress, as has been shown to be necessary for testing GPU MCSs [1, 25, 28]. LitGen takes litmus tests written in an abstract format (currently JSON) that specifies the actions of the test (e.g., loads and stores) and the possible behaviors of the test, with a special designation given to weak behaviors. The tests used in this work were all manually specified, as they are relatively small, but LitGen could be integrated with other tools that use formal models to generate litmus tests, e.g., [2, 48], which would provide more automation and account for more complicated tests and MCSs. LitGen outputs a test shader, which runs the test alongside system stress developed in prior work [25, 28], and a result shader, which aggregates the observed behaviors of the test.
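To illustrate the flow, the sketch below pairs a hypothetical JSON encoding of MP (the exact LitGen schema is not shown here) with a toy generator that emits WGSL-flavored thread bodies; LitGen's real output additionally weaves in the stress machinery discussed above:

```python
import json

# Hypothetical abstract litmus test; field names are illustrative only.
MP_JSON = json.loads("""
{
  "name": "MP",
  "threads": [
    [{"op": "store", "loc": "x", "val": 1}, {"op": "store", "loc": "y", "val": 1}],
    [{"op": "load",  "loc": "y", "reg": "r0"}, {"op": "load", "loc": "x", "reg": "r1"}]
  ],
  "weak": "r0 == 1 && r1 == 0"
}
""")

def emit_wgsl_like(test):
    """Emit one WGSL-flavored body per thread (toy generator, not LitGen's output)."""
    bodies = []
    for actions in test["threads"]:
        lines = []
        for a in actions:
            if a["op"] == "store":
                lines.append(f'atomicStore(&mem.{a["loc"]}, {a["val"]}u);')
            else:
                lines.append(f'let {a["reg"]} = atomicLoad(&mem.{a["loc"]});')
        bodies.append("\n".join(lines))
    return bodies
```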
The result shader is generated separately from the test shader for several reasons:
1. Some tests, like 2+2W (Fig. 3e), examine memory locations for weak behaviors after all threads have finished executing the test. To avoid relying on synchronization features (some of which we are trying to test), we instead pass the test memory buffer into a new result aggregation shader, which executes after the test shader.
2. LitGen implements parallel testing, described in [28], which runs thousands of instances of each litmus test concurrently. Thus, it is natural to leverage the inherent parallelism of the GPU to also aggregate the many results, which otherwise may be time-consuming to do on the CPU, especially since it requires copying memory from the GPU to the CPU.
Currently, two backends exist for LitGen. The tool outputs WGSL shaders directly, as WGSL is a text-based language. SPIR-V, on the other hand, is a low-level representation similar to LLVM IR, increasing its flexibility but making code generation more complex. Therefore, for Vulkan backends LitGen first outputs OpenCL, a compute-focused GPU language, which is similar in syntax to C++. Then, it utilizes Clspv [15] (in Fig. 2), a prototype compiler from OpenCL to SPIR-V, to generate the shader used in the Android app.
**Figure 3:** These litmus tests, along with MP from Fig. 1a, represent six classic weak behaviors allowed by relaxed MCSs. S and L signify a relaxed atomic store and load, respectively.

As WebGPU is primarily browser-based while Vulkan runs on native devices, we currently maintain litmus testing driver programs that control system stress. When tuning and conforming, a set of tests is chosen to run with multiple random system stress configurations, searching for configurations that maximize the rate of weak behaviors and uncover bugs in MCS implementations. While this study includes the largest collection of data on mobile GPU MCS behaviors, in Sec. 7 we discuss future work that could increase the reach of mobile GPU MCS testing even further.
3.3 Web Interface and Android App

Exploring. Figure 4 shows a screenshot of GPUHarbor’s web interface explore page for the MP litmus test after the test has been run with relatively high system stress on a MacBook Pro with an integrated Intel Iris GPU. The top of the page includes a description of the test and pseudocode showing the test instructions. The right-hand side includes an editable list of the parameters that define system stress, along with several presets. When the test is running, the histogram updates in real-time with the number of times each behavior is observed. The progress bar gives an estimate of how much longer is left to run, based on the speed of previous iterations.
The green bars correspond to sequential behaviors, where one thread runs entirely before the other. The blue bar corresponds to interleaved behaviors, where actions from each thread are interleaved (e.g., leading to the behavior `r0 == 0 && r1 == 1` in the MP litmus test). The red bar corresponds to weak behaviors; in this run, three MP weak behaviors were observed out of over 13 million test instances. The histogram uses a log scale, as weak behaviors are relatively rare.
Tuning and Conforming. Both the web interface and the Android app can be used to tune system stress, as in [28]. When tuning, a set of tests can be selected, with presets available for weak memory tests (e.g., those in Fig. 3) and conformance tests, e.g., to test coherence. Testing options like the number of configurations, the maximum number of workgroups, and other parameter overrides can be modified to run different experiments and check specific tests without redeploying any code.
To collect data from volunteer users across a diverse set of devices, we strive to minimize the options users have to configure. This reduces the chances of errors and provides us with a standardized dataset to analyze. The web interface’s tuning page, therefore, includes a tab that exposes no configuration options, but instead shows only a few buttons: one button that starts a combined tuning/conformance run with default parameters; and another button that pulls up a submission form, which submits the results along with some (optional) contact information. Our results are all anonymized; contact details were only collected if users wanted to be informed about the outcome of the study. Before submitting, users agreed that their anonymized results could be aggregated, reported on, and released as part of this study.
3.4 Data Collection
To submit test data, the web interface communicates with a backend service that exposes an API for submitting results and inserting them into an SQLite database (in Fig. 2). The data is then analyzed using Python scripts. The Android app is not yet available on the app store nor is it integrated with the SQLite backend, so results are manually copied off of the device for analysis. In Sec. 7 we
discuss how we can reduce the friction for submitting mobile app results, and thus, increase the reach of future studies. Nevertheless, our study of eight devices is the largest testing campaign of mobile GPU MCS behaviors of which we are aware.
While system stress configurations are generated randomly, we would like to ensure that the configurations run on different devices are the same for data analysis purposes. That is, if different GPUs are tested with the same stress configurations, we can compare how the different devices behaved under the same stress. We ensure this by integrating a seedable Park-Miller random number generator [39] into both the web interface and the Android app and using the same seed when running all of our tuning experiments.
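The Park-Miller "minimal standard" generator is a Lehmer LCG over the prime modulus 2^31 - 1, which makes it easy to implement identically in JavaScript and in the app. A Python rendering (the bounded-draw helper is our own illustrative addition, and is slightly biased):

```python
class ParkMiller:
    """Lehmer LCG: x_{n+1} = 16807 * x_n mod (2^31 - 1). Same seed => same stream."""
    MODULUS = 2**31 - 1

    def __init__(self, seed=1):
        self.state = seed % self.MODULUS or 1   # state must stay in [1, M-1]

    def next(self):
        self.state = (16807 * self.state) % self.MODULUS
        return self.state

    def randint(self, lo, hi):
        """Bounded draw for stress parameters (hypothetical helper, slightly biased)."""
        return lo + self.next() % (hi - lo + 1)
```

Two instances created with the same seed produce identical streams, which is the property that lets the same stress configurations be replayed on every device.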
By default, browsers only expose limited information about the user’s GPU without turning on vendor-specific development flags due to privacy and security concerns around fingerprinting [52]. In order to have as much information as possible about our data, we included instructions asking users to temporarily enable flags so we could collect detailed GPU information. Of the 98 results we collected, 67 included the exact GPU tested. The other 31 results did not specify the exact GPU, but included only the vendor and a string describing the GPU architecture, such as “intel gen-9” or “nvidia ampere”. All Apple devices reported an architecture of “common-3”, making it impossible to immediately distinguish M1s from M2s. However, we show in Sec. 5.2 that our data can be used to infer device information, hindering the ability of browsers to hide the specifics of a user’s GPU.
4 INITIAL RESULTS: WEAK BEHAVIOR CHARACTERIZATION
To collect data from as many sources as possible, we disseminated the link to GPUHarbor’s web interface to the general public, utilizing campus forums and social media, and ran the Android app on eight devices that we could physically access. As shown in Tab. 1, we collected data from millions of tests; each test used a randomly generated system stress configuration (we used 50 configurations on the web interface and 150 on the Android app). In each configuration, tests were run millions of times based on a randomly generated number of workgroups and threads per workgroup.
To ensure data integrity, we implemented a checksum algorithm that verified we saw the expected number of overall behaviors based on the system stress configuration. The testing duration was also recorded; however, we ran into one issue here. Some computers went to sleep in the middle of the tests, suspending the browser’s process and leading to extremely long recorded test times. To overcome this, we recorded testing time on a per test/configuration basis; we then filtered the results so as not to include any test/configuration durations over one minute. We note that each individual test runs quickly (e.g., in less than 5 seconds); thus, runs over one minute most likely occurred when the computer went to sleep. To approximate the length of the test that was suspended, we used a neighboring test’s time.
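The imputation step described above amounts to a small pass over the per-test timings; the sketch below uses illustrative names and a 60-second cutoff:

```python
def clean_durations(durations, cutoff=60.0):
    """Replace per-test durations above cutoff (likely a suspended browser)
    with a neighboring test's duration."""
    cleaned = list(durations)
    for i, d in enumerate(durations):
        if d > cutoff:
            # prefer the previous valid neighbor, else the next one
            neighbors = [durations[j] for j in (i - 1, i + 1)
                         if 0 <= j < len(durations) and durations[j] <= cutoff]
            cleaned[i] = neighbors[0] if neighbors else cutoff
    return cleaned
```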
One consideration for collecting data from the wider public is that we cannot afford to run tests for hours at a time. Previous work targeted only a few devices, running tests on one device for a minimum of 36 hours [25] or 2 hours [28]. However, asking volunteer users to leave their browsers and computer open for that long is impractical and would certainly decrease the number of submissions. Therefore, we heuristically chose the number of test environments and iterations per environment, aiming for the tests to finish in 10-20 minutes.
Figure 5 shows the distribution of testing time on our web interface, broken down by vendor. The results show that NVIDIA devices were the fastest on average, mostly running all tests in under 15 minutes. On the other hand, Intel devices ran slower, with two older Intel GPUs taking over an hour and a half to complete.
In the rest of this section, we analyze our WebGPU and Vulkan data to characterize the rates at which weak behaviors occur on devices from different vendors. These initial results motivate three research questions, which are explored in depth in Sec. 5:
1. Do MCS bugs exist in the wild, especially in GPUs which are relatively untested (Sec. 5.1)?
2. Can our characterization data be used to identify similarities between GPUs (Sec. 5.2)? If so, then our data can be used to develop new testing strategies or to expose potential new browser fingerprinting vulnerabilities.
3. How can a weak behavior characterization study be used in programming guides for implementing synchronization constructs, e.g. mutexes (Sec. 5.3)?
4.1 Weak Behaviors in WebGPU
Figure 6 shows the average rates of observed weak behaviors for the six litmus tests (the five in Fig. 3 plus MP) in the test environment that maximizes the rate on each device, broken down by test and vendor. As described in Fig. 1, we have data from at least 15 devices from each vendor. The overall testing time across all 98 devices was 31.1 hours, an average of 19 minutes per device.
Devices from all vendors showed weak behaviors on each litmus test. In all but two cases, observing weak behaviors was all or nothing: if a device revealed weak behaviors on one litmus test, it revealed weak behaviors on all of them. In contrast, on a device implementing x86’s TSO MCS, we would expect to only see store buffering behaviors. However, unlike x86, GPU devices do not provide low-level details, such as the hardware-level MCS, thus it was not clear what types of weak behaviors we would observe. These results show that many GPUs implement very relaxed memory models, in contrast to stronger CPU architectures like x86 TSO.
Intel devices tended to have the lowest rate of weak behaviors, with just over half of them (15/26) revealing weak behaviors on each test. The median rate of weak behaviors on Intel devices was even lower than their average, around 0.02% for each test. No Intel device showed a rate of weak behaviors above 1% on any test.
NVIDIA devices revealed weak behaviors at a relatively low rate. Our data includes results from NVIDIA’s Kepler (2012), Maxwell (2014), Pascal (2016), Turing (2018), and Ampere (2020) architectures, with a majority being the more recent Ampere. Older devices generally showed fewer weak behaviors, with the minimum on each of the six tests being Kepler and Maxwell devices. However, one outlier is that the maximum rate of SB behaviors (7.73%) was seen on a Kepler device. Interestingly, that device was also the only device not to observe any weak behaviors on S, LB, and 2+2W. The only other device not to reveal weak behaviors on a test was a Quadro K620 with a Maxwell architecture, on MP, R, and SB.
Apple devices were consistently weak, revealing weak behaviors on every device and test, generally at a higher rate on all tests than NVIDIA devices but with less variation than AMD devices. Apple GPUs have only been recently built into non-mobile devices, so these results represent the first comprehensive evaluation of the weak behaviors on Apple GPUs. We don’t have the specific name of every Apple device, but we were able to collect enough information to show we had results from Apple M1 (basic, Pro, Max) and Apple M2 (basic, Pro) devices.
AMD devices were also very weak, with 100% of devices showing weak behaviors on every test. The clear highest average rate occurs on the SB litmus test on AMD GPUs. Most of the AMD devices show a high rate of weak behaviors on SB, approaching 10% and higher, but devices with AMD’s Graphics Core Next 5 micro-architecture all showed rates under 1%. This means that even from a single vendor, the behaviors of different architectures can vary widely and past results from one vendor cannot be counted on to predict future behaviors.
4.2 Weak Behaviors in Vulkan
The data in Tab. 2 shows the percentage of weak behaviors in the test environment that maximizes the rate at which they occur for our Android devices. In contrast to our web GPUs, in the mobile setting, weak behaviors were observed in every test on only one device, the NVIDIA Tegra X1, but the rates on this device were very low, beneath 0.1%. The most difficult test to observe in general was R, which checks whether a store is reordered with a following load on one thread. We did not observe any weak behaviors on the Imagination GPU; because testing is fundamentally incomplete, this could mean that the device implements a strong MCS, or that our testing approach was not effective. Interestingly, ARM only showed weak behaviors in the MP test.
We observe that, in general, the rates of weak behaviors increase as devices become more powerful. This is especially apparent from the four Qualcomm devices we test, as the rate of weak behaviors increases from 0% on the Adreno 610 (which has 96 shading units, analogous to NVIDIA’s CUDA cores) up to a maximum of 14.37% in SB on the Adreno 660 (with 512 shading units). One intuitive explanation for this might be that smaller GPUs lack the ability to schedule as many threads at once, naturally reducing the rates of weak behaviors despite architectures that might allow them. We see a similar trend on the Arm GPUs, where the smaller Mali-G71 (32 shading units) showed a lower rate of weak behaviors than the larger Mali-G78 (384 shading units).
5 INSIGHTS AND IMPACTS
We now set out to answer the three questions posed in Sec. 4 using our data and characterization of weak behavior rates.
5.1 MCS Bugs
Our conformance testing campaigns discovered bugs on several vendors’ devices when running under the Vulkan and WebGPU frameworks.
(1) ARM: We observed coherency violations of the MP-CO litmus test when using the Vulkan framework on two Arm GPUs, a Mali-G71 and a Mali-G78. These bugs were reported to and confirmed by Arm, leading to a compiler fix to insert a missing memory fence. Arm has also added regression tests based on the pattern of the violation we reported.
(2) NVIDIA: We also observed violations of the MP-CO test when run using the Vulkan framework on an NVIDIA Tegra X1. Additionally, our WebGPU conformance test results revealed violations of a different coherence test, RR, on an NVIDIA Quadro P620 running on a Linux desktop (therefore using Vulkan as the native framework). The combined
Table 2: Rates of weak behaviors on our Android devices (Vulkan), per litmus test, in the test environment that maximizes the rate at which they occur.
<table>
<thead>
<tr>
<th>Vendor</th>
<th>Device</th>
<th>MP</th>
<th>LB</th>
<th>SB</th>
<th>S</th>
<th>R</th>
<th>2+2W</th>
</tr>
</thead>
<tbody>
<tr>
<td>Qualcomm</td>
<td>Adreno 610</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
</tr>
<tr>
<td></td>
<td>Adreno 440</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>1.45%</td>
<td>1.65%</td>
<td>0%</td>
</tr>
<tr>
<td></td>
<td>Adreno 440L</td>
<td>0.04%</td>
<td>2.75%</td>
<td>5.81%</td>
<td>3.81%</td>
<td>0%</td>
<td>6.38%</td>
</tr>
<tr>
<td></td>
<td>Adreno 660</td>
<td>0.12%</td>
<td>8.5%</td>
<td>14.37%</td>
<td>5.49%</td>
<td>0%</td>
<td>11.5%</td>
</tr>
<tr>
<td>Arm</td>
<td>Mali-G71</td>
<td>0.04%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
</tr>
<tr>
<td></td>
<td>Mali-G78</td>
<td>1.56%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
</tr>
<tr>
<td>Imagination</td>
<td>PowerVR GE8320</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
<td>0%</td>
</tr>
<tr>
<td>NVIDIA</td>
<td>Tegra X1</td>
<td>0.01%</td>
<td>0.05%</td>
<td>0.02%</td>
<td>0.01%</td>
<td>0.01%</td>
<td>0.05%</td>
</tr>
</tbody>
</table>
Table 3: Each row shows the cosine similarity statistics between all pairs of devices from that vendor. The last row shows the similarity statistics across all pairs of devices.
<table>
<thead>
<tr>
<th>Vendor</th>
<th>Avg</th>
<th>Median</th>
<th>Min</th>
<th>Max</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intel</td>
<td>0.870</td>
<td>0.891</td>
<td>0.683</td>
<td>0.985</td>
</tr>
<tr>
<td>Apple</td>
<td>0.903</td>
<td>0.913</td>
<td>0.699</td>
<td>0.993</td>
</tr>
<tr>
<td>NVIDIA</td>
<td>0.903</td>
<td>0.931</td>
<td>0.670</td>
<td>0.996</td>
</tr>
<tr>
<td>AMD</td>
<td>0.904</td>
<td>0.927</td>
<td>0.661</td>
<td>0.989</td>
</tr>
<tr>
<td>All</td>
<td>0.840</td>
<td>0.862</td>
<td>0.477</td>
<td>0.996</td>
</tr>
</tbody>
</table>
The sequential behaviors give the data a degree of freedom, which is necessary for calculating a valid similarity measure.
For our similarity metric, we choose cosine similarity, which measures the cosine of the angle between two vectors and ranges from -1 to 1. We chose cosine similarity because it is a relative metric, not an absolute one like Euclidean distance, meaning that devices that show different absolute rates of behaviors, but in similar proportions, are classified as more closely related.
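Concretely, for two behavior-rate vectors u and v the metric is u·v / (||u|| ||v||); scaling a profile by a constant leaves the similarity unchanged, which is the "relative" property exploited here:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two behavior-rate vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))
```

A device profile and a uniformly scaled copy of it score 1.0, while profiles with no overlapping behaviors score 0.0.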
Device Identification. Table 3 shows a summary of the similarity between devices in our study. All similarities are positive, with the minimum being 0.477 between an Intel and AMD device. This is not surprising, since effective system stress is likely to reveal weak behaviors on many devices. However, the average and median similarities between devices from each vendor are higher than the overall average and median, showing that in general devices from the same vendor tend to have more similar MCS behaviors.
For Apple and NVIDIA, we confirmed that the maximum similarity occurs between identical GPUs: two Apple M1 Max’s and two NVIDIA GeForce RTX 3080s. For AMD, we observe a maximum similarity of 0.989 between two devices, one of which is a Radeon Pro 5500M (A) while the other device (B) did not report a model and instead only indicated that it was from the same architectural generation as A. However, we observed a high similarity (0.985) between A and another Radeon Pro 5500M (C), as well as a similarity of 0.984 between B and C, so it seems likely that A, B, and C are all the same device. We do a similar analysis with Intel to determine that an unknown device is most likely an Intel Iris Xe Graphics. While we are most interested in using this data to help choose conformance test strategies, as shown next, we also note that GPU MCS behavior data like this exposes a fingerprinting vulnerability, despite the specification trying to hide specific device information for security reasons.
Clustering-Based Testing Strategies. K-means clustering minimizes the distortion, i.e., the sum of squared distances between each vector and its cluster centroid. Applying k-means clustering to GPU MCS behavior has implications for testing strategies: when developing cross-platform GPU applications that rely on shared memory operations, testing those applications on a set of devices can increase confidence in the correctness of the implementation. A naive strategy might be to choose one device from each major vendor, but our results show that this is not necessarily optimal.
Table 4 shows the result of running k-means with six clusters on the similarity data from Table 3. The "elbow-method" heuristic showed that the rate of decrease in distortion leveled off at six clusters on our data. The clustering data shows that devices from the same vendor are generally placed into the same cluster, but there are outliers in each case. The only NVIDIA Kepler device in our study was dissimilar enough from other devices that it was placed in its own cluster. Kepler is also the oldest NVIDIA architecture in our study, showing that special testing attention might be needed when supporting older devices in cross-platform GPU frameworks.
When selecting which devices to test a shared memory GPU application on, the strategy should be to first choose a number of clusters based on the rate of decrease in distortion, and then select at least one device from each cluster. Using our data, this might mean selecting an AMD device from A, Apple devices from clusters B and D, NVIDIA devices from clusters C and F (the only device in that cluster), and an Intel device from cluster E. Despite choosing more Apple and NVIDIA devices than AMD and Intel, the similarity data ensures the tests maximally cover devices with different behavior profiles.
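The selection strategy above can be sketched as follows. The tiny k-means implementation and the toy behavior vectors (with hypothetical device names) are illustrative stand-ins; the paper clusters its measured similarity data, and a library implementation of k-means would normally be used:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: returns a cluster label for each point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared distance.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(d) / len(members) for d in zip(*members)]
    return labels

def pick_test_devices(devices, vectors, k):
    """Choose one representative device from each behavior cluster."""
    labels = kmeans(vectors, k)
    chosen = {}
    for device, label in zip(devices, labels):
        chosen.setdefault(label, device)  # first device seen per cluster
    return sorted(chosen.values())

# Toy data: two clearly separated behavior profiles.
devices = ["Mali-G78", "Adreno-610", "RTX-3080", "M1-Max"]
vectors = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]]
print(pick_test_devices(devices, vectors, k=2))  # one device per cluster
```

With well-separated profiles like these, the result covers both clusters regardless of centroid initialization.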
### 5.3 Implementing Synchronization Algorithms
We now discuss a use case of how the diversity of weak memory profiles across these GPUs can impact software development. Locking algorithms use atomic operations to synchronize access to critical sections. Lock implementations depend on careful placement of memory fences to prevent compilers and hardware from reordering memory accesses, which can cause critical section failures. We implemented three common spin-locks: test-and-set (TAS), test-and-test-and-set (TTAS), and compare-and-swap (CAS). Each of these locks must disallow MP behaviors using acquire/release memory fences. However, our results in Sec. 4 show that on some mobile devices MP weak behaviors never occur, meaning that if the locks are tested only on those devices, they may run correctly despite being incorrectly implemented (according to the specification).
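The structure of the three lock variants can be sketched as below. A heavy caveat: this Python sketch simulates atomics with a mutex and runs under the interpreter's sequentially consistent semantics, so unlike the GPU experiments it cannot exhibit fenceless failures; it only illustrates the shape of each algorithm (a GPU or C11 implementation would use hardware atomics with explicit acquire/release ordering).

```python
import threading

class AtomicCell:
    """Simulated atomic cell (mutex-backed stand-in for hardware atomics)."""
    def __init__(self, value=0):
        self._value = value
        self._mutex = threading.Lock()

    def load(self):
        with self._mutex:
            return self._value

    def store(self, value):
        with self._mutex:
            self._value = value

    def test_and_set(self):
        # Atomically set to 1, returning the previous value.
        with self._mutex:
            old, self._value = self._value, 1
            return old

    def compare_and_swap(self, expected, new):
        with self._mutex:
            if self._value == expected:
                self._value = new
                return True
            return False

class TASLock:
    def __init__(self):
        self.flag = AtomicCell(0)
    def acquire(self):
        while self.flag.test_and_set():   # spin until we flip 0 -> 1
            pass
    def release(self):
        self.flag.store(0)

class TTASLock(TASLock):
    def acquire(self):
        while True:
            while self.flag.load():       # spin on plain loads first
                pass
            if not self.flag.test_and_set():
                return

class CASLock(TASLock):
    def acquire(self):
        while not self.flag.compare_and_swap(0, 1):
            pass

def hammer(lock_cls, threads=4, iters=1000):
    """Threads increment a non-atomic counter inside the critical section."""
    lock, counter = lock_cls(), [0]
    def worker():
        for _ in range(iters):
            lock.acquire()
            counter[0] += 1               # non-atomic increment
            lock.release()
    ts = [threading.Thread(target=worker) for _ in range(threads)]
    for t in ts: t.start()
    for t in ts: t.join()
    return counter[0]

print(hammer(TASLock))  # 4 threads x 1000 increments -> 4000
```

On hardware with a weak memory model, omitting the acquire/release ordering on the flag operations is exactly the kind of bug that may only surface on devices that exhibit MP weak behaviors.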
To investigate this, we tested our three locks on two Android devices, an Arm Mali-G78 and a Qualcomm Adreno 610. The locks were implemented both with and without appropriate acquire/release memory fences. In these tests, threads from different workgroups acquire the lock 10k times and increment a non-atomic memory location in the critical section. We ran this test for 1k iterations and recorded the number of critical section violations we observed for each device and each lock.
On the Arm Mali-G78, a larger GPU which exhibits a relatively high rate of MP behaviors, we observed critical section failures in unfenced versions of all three locks; in every failure case except one the value was 189,999 instead of 190,000, meaning that just one of the increments was not reflected. In the remaining failure case, the value was 189,998. On the Qualcomm Adreno 610, which exhibited no MP behaviors in our study, we saw no failures. Both devices exhibited no failures when locks were run with correct fences.
Therefore, when writing applications that require synchronization, care must be taken to ensure the application is tested on devices where incorrect implementations will lead to failures, highlighting the importance of collecting and characterizing MCS behavior data.
### 6 LESSONS LEARNED
In this section we discuss important lessons learned while developing and running our study.
**Ease of Use.** Technical studies of low-level details like memory consistency specifications [3, 25, 46] have been run by expert practitioners and involve installing special software (e.g., OpenCL/CUDA drivers) and running experiments from command-line interfaces. However, experiments that solicit participation from non-technical users require accessible, frictionless interfaces in order to collect many results. For example, we initially had users download their results and email them to us directly, but found that many users would not take this seemingly small step. Thus, we implemented a way to submit results by simply clicking a button. This required substantial engineering effort, both to set up a client/server infrastructure and to distribute the tools in a non-technical way (e.g., through web browsers and app stores). Once implemented, this workflow also had the benefit of standardizing our experiments; instead of relying on users to configure their systems and choose the right options, everything was baked in so that users only had to click a few buttons to run and submit results.
**Testing Time.** Previous studies ran tests for hours or days, but it is unrealistic for volunteer users to run experiments that long on their devices. Therefore, we explored the trade-off space between experiment time and behavior coverage. Through trial and error, we determined parameters that allowed us to collect high-quality, standardized data in a short time frame, utilizing testing techniques from prior work that increased testing speed and provided statistical measures of reproducibility [28].
**Enabling New Research Questions.** Important research questions on memory consistency, including the three from Sec. 4, require performing a large-scale study. For example, previous studies [25] have attempted to create portable testing strategies, but could only provide limited guidance on choosing representative sets of devices to test on due to the small number of devices in their evaluation. On the other hand, our data shows that GPUs from different vendors can behave similarly under stress, and thus portability may not be vendor-specific. Therefore, increasing the scale of evaluation through faster and more accessible testing should be an important factor when developing new testing strategies for a diverse (and ever growing) set of devices.
**Extensibility.** When we first designed our LitGen tool, it only generated SPIR-V shaders. However, as we started focusing on testing WebGPU’s MCS, LitGen’s neutral configuration language (JSON) allowed us to easily write a backend generator for WebGL shaders. Ensuring our tools are extensible means that they might also be useful for researchers testing other areas of GPU specifications, e.g., floating point operation accuracy. In the same vein, our initial app only targets Android devices running Vulkan, but as we seek to expand the scope of our testing, we plan on developing an app that will work on both Android and iOS devices.
### 7 FUTURE WORK
This work required significant engineering effort to enable the testing of many different GPUs. However, given the difficulty of cross-platform GPU programming, we were still unable to test mobile Apple GPUs, which appear in some of the most widely used mobile devices. Additionally, our web interface and Android app contain distinct user interfaces and GPU setup code, duplicating development and maintenance effort. In this section, we outline a path forward, with Flutter as a fitting match for these goals.
Flutter [17] is an open-source software development kit developed by Google that provides deployment options to desktop platforms (such as Windows, macOS, and Linux), mobile platforms (Android, iOS), and even web deployment from a single frontend codebase. With a unified codebase for the MCS testing front end, development work can be focused on designing backend implementations specific to those platforms. Underlying Flutter is Dart [16], a language also developed by Google for cross-platform app development. For each supported platform, Flutter provides an interface to backend code native to the specific platform. On the Android end, GPU access is provided through Dart’s foreign function interface (FFI) library to load a dynamically linked C library, compiled against the version of Vulkan provided by Android’s Native Development Kit (NDK) [14]. The Dart FFI library can be used similarly on all supported platforms except for the web, for which GPU access will involve calls to JavaScript code utilizing WebGPU.
Vulkan, while well-supported on Windows, Linux, and Android devices, is not officially supported on macOS and iOS. For these platforms, there are two options. For a more native-friendly approach, the Vulkan backend code could instead be rewritten against Apple’s Metal [6] API, with SPIR-V shaders transpiled to the Metal Shading Language (MSL) using SPIR-V-Cross [24], a tool developed by the Khronos Group. However, to reduce development time and duplicated code across platforms, the Vulkan backend code can be run through MoltenVK [25], a Khronos Group implementation of a large subset of Vulkan 1.2 on top of Metal. This provides a portability layer with which to run Vulkan applications on iOS and macOS.
We also plan on integrating our new tools with the current server backend, allowing us to collect data from devices we do not have physical access to using a simple API interface. With a single source for interface design, GPU setup, and data collection, it is expected that future work will be able to deploy MCS testing at a wider scale and collect results from GPU hardware previously inaccessible in related work.
### 8 RELATED WORK
Testing MCSs. Work on testing MCS dates back to tools like ARCHTEST [49] and TSOTool [18], which each generated test programs containing sequences of loads and stores and then looked for violations of sequential consistency. With the introduction of formal MCSs, researchers developed tools like LITMUS [3], which runs litmus tests generated from formal models directly on ISAs (namely x86, Power, and Arm) and includes stress parameters that make weak behaviors more likely.
Techniques for CPU MCS testing have been extended to GPUs [1, 25]. Weak behaviors on GPUs are notoriously difficult to reveal, leading to work that statistically analyzed tuning techniques and reproducibility of results when running litmus tests on GPUs [25]. To better evaluate the efficacy of test environments and provide confidence in MCS implementations, [28] introduced a methodology based on black-box mutation testing [8], finding bugs in several WebGPU MCS implementations.
Previous studies have been limited in the number of devices they were able to test. In contrast, this study introduces tooling that allows us to conduct the largest ever GPU MCS testing campaign, running tests across 2 frameworks, 7 vendors, and 106 devices.
Testing at Scale. Other studies have tested large numbers of devices, searching for bugs in compilers and hardware. In [11], 17 GPU and driver combinations were tested for compiler bugs. Our approach, distributing the GPU MCS testing experiment using a web interface, is a form of volunteer computing, where the general public volunteers their computing resources for research studies. Volunteer computing has been used for many compute-intensive tasks, including searching for extraterrestrial life [5], training neural networks [10], sequencing genomes [40], and climate modeling [9].
### 9 CONCLUSION
We introduce GPUHarbor, a tool suite with a web interface and Android app for accessible cross-platform GPU MCS testing. We utilize GPUHarbor to perform a large-scale study on weak behaviors in 106 GPUs from seven vendors and find two bugs in GPUs running on mobile devices. Our results show the importance of scaling previous MCS testing strategies in order to characterize the behavior of different devices, perform conformance testing, and design application testing strategies.
### ACKNOWLEDGMENTS
We thank the reviewers whose feedback helped strengthen the paper and motivated the lessons learned section. We thank Jan-Harald Frederiksen from Arm for working with us to confirm the bug on Arm’s devices, and Jeff Bolz from NVIDIA for finding and confirming the bug in NVIDIA’s compiler. We thank David Neto and Alan Baker from Google for feedback on the description of the WebGPU memory model and our results analysis. We thank everyone who submitted anonymous data for this study, including friends and family. This work was supported by a gift from Google.
### REFERENCES
Received 2023-02-16; accepted 2023-05-03
Mixed-Initiative Approaches to Global Editing in Slideware
Darren Edge^1, Sumit Gulwani^2, Natasa Milic-Frayling^3, Mohammad Raza^3, Reza Adhitya Saputra^{1,4}, Chao Wang^1 & Koji Yatani^{1,5}
^1 Microsoft Research, Beijing, China; ^2 Microsoft Research, Redmond, USA; ^3 Microsoft Research, Cambridge, UK; ^4 University of Waterloo, Waterloo, Canada; ^5 University of Tokyo
{daedge, sumitg, natasamf, a-moraza, chaowa}@microsoft.com, radhitya@uwaterloo.ca, koji@iis-lab
ABSTRACT
Good alignment and repetition of objects across presentation slides can facilitate visual processing and contribute to audience understanding. However, creating and maintaining such consistency during slide design is difficult. To solve this problem, we present two complementary tools: (1) StyleSnap, which increases the alignment and repetition of objects by adaptively clustering object edge positions and allowing parallel editing of all objects snapped to the same spatial extent; and (2) FlashFormat, which infers the least-general generalization of editing examples and applies it throughout the selected range. In user studies of repetitive styling task performance, StyleSnap and FlashFormat were 4-5 times and 2-3 times faster respectively than conventional editing. Both use a mixed-initiative approach to improve the consistency of slide decks and generalize to any situations involving direct editing across disjoint visual spaces.
Author Keywords
Presentations; visual consistency; layout editing; snapping; programming by example; least-general generalization.
ACM Classification Keywords
H.5.2. Information interfaces and presentation: User Interfaces.
INTRODUCTION
Designing, delivering, and watching slide presentations are common aspects of professional life. As a modular authoring medium, slides constrain the problem of visual communication and afford flexibility in restructuring and reuse. However, this same modularity can result in weak connections between slides that are also visually inconsistent.
The theory of processing fluency proposes that aesthetic pleasure is based on perceptual processing, with high fluency associated with positive evaluations such as agreement [33]. For slide design, audience negativity toward visual disorder (e.g., clutter, inconsistency, misalignment) has been argued to result directly from reduced processing fluency [1].
Nancy Duarte, author of “Slide:ology” [10], recommends the reuse of slide layouts to help the audience anticipate where content will appear next (i.e., to increase processing fluency through repetition). Garr Reynolds, author of “Presentation Zen” [34], also recommends repetition, but of “certain design elements” rather than whole slide layouts. The use of slide templates, copied slide elements, and temporary grids can all help to avoid inconsistency problems, whether within slides (nonaligned elements appearing randomly placed), between slides (misaligned elements “jumping” on transition), or across the deck (related elements lacking consistent styling). However, creating and maintaining visual consistency across slides is difficult when the desired layouts and styles are not known in advance. During the design of slide visuals, making systematic changes one element at a time is both repetitive and tedious, as well as making it difficult to refactor slide content into slide templates. Lack of support for the consistent redesign of elements repeated across slides is thus a major contributing factor to low processing fluency.
To address both the consumption problem of processing fluency and the authoring problem of repetitive styling, we present two complementary tools: (1) StyleSnap, for the automatic alignment of object edges within and across slides; and (2) FlashFormat, for the systematic restyling of related objects. Both are mixed-initiative systems [19] in which users collaborate with intelligent services to achieve their goals.
The intelligence of StyleSnap lies in progressive hierarchical clustering of each of the four object edge positions (offsets from the left and top slide edges), in ways that increase alignment both within and across slides without introducing object overlaps or image distortions. After this automated snapping process, the user can independently unsnap, merge, and style the resulting groups of objects snapped to the same slide extent (i.e., all four corresponding edges share the same position values).
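A much-simplified sketch of the clustering-and-snapping idea, for one edge dimension only (StyleSnap proper clusters all four edge positions hierarchically and vetoes snaps that would cause overlaps or image distortions):

```python
def snap_positions(positions, tolerance):
    """Snap 1-D edge positions: adjacent positions whose gap is within
    `tolerance` form a cluster, and every member snaps to the cluster
    mean."""
    snapped = {}
    cluster = []
    for p in sorted(set(positions)):
        if cluster and p - cluster[-1] > tolerance:
            # Gap too large: close the current cluster and snap it.
            mean = sum(cluster) / len(cluster)
            snapped.update((q, mean) for q in cluster)
            cluster = []
        cluster.append(p)
    if cluster:
        mean = sum(cluster) / len(cluster)
        snapped.update((q, mean) for q in cluster)
    return [snapped[p] for p in positions]

# Edges at 10 and 12 are near-aligned and snap together; 50 stays alone.
print(snap_positions([10, 12, 50, 12], tolerance=5))  # [11.0, 11.0, 50.0, 11.0]
```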
Conversely, the intelligence of FlashFormat comes after the user has directly supplied some examples of the repetitive edit they wish to apply more generally. It infers the least-general generalization [32] of the example edits from the attribute values shared among all edited objects and the transformation performed on them. Each application of FlashFormat extends the current edit to the smallest generalizable set of objects, obtained by allowing the unshared attributes to vary freely.
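The least-general generalization over object attributes can be sketched as an intersection of shared attribute values. The attribute names and the bolding edit below are invented for illustration, not taken from FlashFormat's implementation:

```python
def least_general_selector(examples):
    """Keep only the (attribute, value) pairs shared by ALL example
    objects; unshared attributes are left free to vary."""
    shared = dict(examples[0])
    for obj in examples[1:]:
        shared = {k: v for k, v in shared.items() if obj.get(k) == v}
    return shared

def apply_edit(objects, examples, edit):
    """Apply `edit` to every object matched by the generalization."""
    selector = least_general_selector(examples)
    for obj in objects:
        if all(obj.get(k) == v for k, v in selector.items()):
            edit(obj)

# Hypothetical slide objects and an example-driven bolding edit.
objects = [
    {"kind": "title", "color": "red"},
    {"kind": "title", "color": "blue"},
    {"kind": "body", "color": "red"},
]
# The user bolded the two titles; their colors differ, so only `kind`
# is shared and the edit generalizes to all titles (but not the body).
apply_edit(objects, examples=[objects[0], objects[1]],
           edit=lambda o: o.update(bold=True))
print([o.get("bold", False) for o in objects])  # [True, True, False]
```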
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
CHI 2015, April 18 - 23 2015, Seoul, Republic of Korea. Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM 978-1-4503-3145-6/15/04 ...$15.00
http://dx.doi.org/10.1145/2702123.2702551
Since StyleSnap increases the number of shared attribute values throughout a slide deck (via both snapped object positions and propagated style changes), prior use of StyleSnap means that fewer example edits are required to cover all of the variance in the set of objects to be edited with FlashFormat. The two tools are thus complementary, although each has substantial value in standalone use.
In this paper, we first present a literature review on snapping and formatting, as well as the tools available in commonly-used graphical systems. We then present an examination of a PowerPoint slide-deck corpus, showing how the use of such templates in practice restricts their ability to create and maintain consistency. In the following sections, we describe the design and implementation of two tools – StyleSnap and FlashFormat – that address the limitations of conventional templates. We also present a user study of each, demonstrating their superiority over existing approaches in task performance time and user preferences. We then report the lessons learned from both systems and discuss improvements for future mixed-initiative editing systems.
Both tools generalize beyond slideware to any visual software that requires consistent styling across multiple pages (e.g., word processor documents), grids (e.g., spreadsheets), frames (e.g., when editing video overlays), or repeated visual spaces of any kind.
Overall, this work offers the following contributions:
1. Two approaches to selecting objects for systematic editing, based on: (a) similarity of objects’ spatial extents as determined by clustering of their edge positions (StyleSnap); and (b) similarity of objects’ attribute values as determined by the least-general generalization of example edits performed on other objects (FlashFormat).
2. Two approaches to simultaneously editing objects to reach a state of consistency, based on (a): snapping misaligned object edges into a reduced number of aligned positions, without introducing artifacts (StyleSnap); and (b) applying the least-general generalization of example edits to all objects selected for editing (FlashFormat).
RELATED WORK
Microsoft PowerPoint remains the most popular application in the category of “slideware” it established. PowerPoint also has a wealth of existing presentations available online, rich tools for the analysis of its XML-based documents, and an add-in framework for augmentation of the PowerPoint application. We therefore take PowerPoint as representative of slideware in the development and evaluation of our tools.
**Alignment of Document Objects**
Snapping is a commonly-used technique to guide objects into a state of alignment [5]. For example, PowerPoint 2013 provides visual line-guides and snapping behavior to support object alignment within a slide, as well as the traditional snapping of object edges to an underlying grid whose resolution can be customized. Object snapping can be extended in several ways, such as adaptively changing snapping behavior based on user input [26] or adaptively inserting motor space to support snap target differentiation [4]. Other approaches have addressed mouse-free snapping on surfaces and tabletops by using control-display gain [13], multi-touch [15], pen input [14], and the non-dominant hand [40]. None focus on cross-slide object alignment in slideware or alignment across disjoint visual spaces of any kind.
**Layout of Document Objects**
One common approach to achieving consistent, well-designed layouts is to allow the user to select from a range of predesigned “templates” and enter content accordingly. The idea is that rather than making layout changes on individual slides (which can create inconsistencies), the user makes any changes on the template itself, automatically updating slides which adopt that template. In both PowerPoint 2013 and Keynote 6.2, such templates are provided in a special view called the Slide Master. This view defines slide layouts and styles through placeholders, which are also exposed to the user when they use menus to select a layout for a new slide or to update a slide to an alternative Slide Master layout. However, there are several usability problems with this approach. First, templates need to be decided in advance of slide creation rather than discovered through exploration on slides. Second, directly editing a slide object linked to a placeholder removes the ability to restyle that object through the Slide Master, unless the slide template is manually reapplied. When inspecting either an object or a slide, the inability to determine whether the placeholder links are intact makes it difficult to anticipate the scope of changes that are possible through slide templates. These problems belong to a more general category of risks that are known to reduce users’ willingness to invest attention in abstraction use [6].
In the research domain, document analysis has been used to suggest layouts that satisfy criteria given by the user [24], such as logical structures of information to be visualized in presentation slides [38], or to support version management of multiple slide decks [9]. An alternative approach is to directly specify the structure of the desired document and allow the system to provide “styling as a service”, as in the HyperSlides system for presentation prototyping [11].
One approach to automated layout is to consider it as a visual constraint satisfaction problem. Constraints express high-level relationships between objects (e.g., text referencing a picture) or geometric structures (e.g., the sizes of all textboxes are the same). These constraints can be described as rules [7, 16, 39]. A major challenge of such rule-based systems is to anticipate all required rules. Another approach is to consider it as a problem of energy function optimization. This has been explored in the context of adaptive grids [20], driven by considerations such as goodness of template fit and micro-typography aesthetics, but is not always predictable for dynamic content such as news feeds. The concept of “conditional shapes and groups” can help to dynamically generate more flexible constraint systems [36], as can combinations of constraint- and force-based approaches [2].
While such automated layout systems are good for content that would not otherwise be “designed” or is yet to be designed, our tools specialize in the restyling of existing presentation content with a high degree of user control.
**Formatting of Document Objects**
The Slide Master is a form of indirect editing in PowerPoint and similar slideware. Another is the Format Painter, which allows all non-spatial attributes of a source object to be copied and “painted” onto destination objects, supporting reuse of object formatting in the slide view.
Macros, batch processing, history brushes, and graphical search and replace [22] are yet further ways in which users can capture and reuse action sequences. EAGER [8] is an early work using programming-by-example principles [29] to support efficient text data entry, but does not cover visual style changes. Abstract object selection and restyling is also possible with interactive machine learning [12], by inferring the user’s desired scope based on patterns of selection and deselection (e.g., for file [35] and friend [3] selection).
For document editing, the LAPIS text editor [27] supports intelligent group selection and simultaneous reformatting of strings. An extension supports intelligent find-and-replace operations in text documents, grouping different string selection candidates by literal and semantic similarities [28].
**CORPUS ANALYSIS OF TEMPLATE USAGE**
We wanted to understand the extent to which the apparent usability problems of Slide Master templates were evident in examples of presentations downloaded from the Web. We built a corpus of over 8000 presentations using Bing web searches specifying the “ext:pptx” filter. The resulting slide decks were drawn from many fields including business, government, science, technology, and education. We used the Open XML SDK 2.5 [30] to parse these presentations and extract statistics on slides and objects. Table 1 shows the results from the 7663 successfully processed files.
We found that 88% of slides (24.7 out of 28.2) on average contained placeholder objects created by the Slide Master. We also found that the proportion of objects per slide that could actually be restyled through the Slide Master (without first resetting the slide to restore broken links, which loses any custom layout and styling of placeholders) was only 21% (1.1 out of 5.3) on average. That is to say, while the vast majority of slides (88%) have the potential to be updated via the Slide Master, user editing behavior means that the vast majority of objects (79%) require manual editing.
We conducted a second analysis to understand the degree to which objects shared the same or similar spatial extent on slides, corresponding to sharing the same or similar edge offsets (distances from the left and top slide edges). For each presentation, we first created a mapping from extents to objects, progressively varying the matching tolerance from 0 to 100 points in 10-point increments (28pt ≈ 1cm). In each iteration, we grouped objects across slides whose edges were all located within a matching region of an existing extent.
<table>
<thead>
<tr>
<th>Slide Statistics</th>
<th>Mean (SD)</th>
</tr>
</thead>
<tbody>
<tr>
<td># of slides</td>
<td>28.2 (20.7)</td>
</tr>
<tr>
<td># of slides with placeholders unmodified</td>
<td>19.0 (17.5)</td>
</tr>
<tr>
<td># of slides with placeholders modified</td>
<td>5.7 (9.0)</td>
</tr>
<tr>
<td># of slides without placeholders</td>
<td>3.5 (7.8)</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Shape Statistics</th>
<th>Mean (SD)</th>
</tr>
</thead>
<tbody>
<tr>
<td># of objects per slide</td>
<td>5.3 (6.1)</td>
</tr>
<tr>
<td># of objects created by user</td>
<td>3.9 (6.2)</td>
</tr>
<tr>
<td># of objects from unmodified placeholders</td>
<td>1.1 (0.9)</td>
</tr>
<tr>
<td># of objects from modified placeholders</td>
<td>0.3 (0.4)</td>
</tr>
</tbody>
</table>
Table 1. Statistics on the number of slides and objects in 7,663 PowerPoint files we collected from the Internet.
(For example, with a matching tolerance of 0, we grouped only objects that had exactly the same extent.)
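To make the grouping concrete, the extent-matching procedure can be sketched in a few lines of Python. The greedy first-fit assignment and the example extents are our illustrative assumptions, not the exact implementation used in the analysis:

```python
def group_by_extent(extents, tol):
    """Greedily group (left, top, right, bottom) extents: an extent joins the
    first group whose representative it matches on all four edges within
    `tol` points; otherwise it starts a new group."""
    groups = []  # list of (representative_extent, member_extents)
    for ext in extents:
        for rep, members in groups:
            if all(abs(e - r) <= tol for e, r in zip(ext, rep)):
                members.append(ext)
                break
        else:
            groups.append((ext, [ext]))
    return groups

# Three near-identical caption extents plus one distinct title extent.
extents = [(100, 400, 500, 450), (104, 402, 498, 452),
           (96, 398, 502, 449), (100, 50, 500, 120)]
print(len(group_by_extent(extents, tol=0)))   # 4 groups (exact matching)
print(len(group_by_extent(extents, tol=10)))  # 2 groups (10pt tolerance)
```

Raising the tolerance merges "near match" extents into shared groups, which is exactly the effect measured in this analysis.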
This analysis revealed a strong skew resulting from many small, often single-element groups. With exact matching (a tolerance of zero points), the average number of position groups was 35 and objects in the largest position group occurred on 82% of slides. All groups in the upper quartile (top 9 groups of 35) had objects occurring on at least 5% of slides. Since these cannot all be placeholder objects, they are likely to come from copy-and-paste used to ensure size, position, and style consistency for individual objects (and also slides). When the tolerance was 60 points (about 2cm—a “near match” on all four object edges), the number of overall position groups halved (from 35 to 18). Objects in the largest position group occurred on 93% of slides and all position groups in the upper quartile (top 5 groups of 18) had objects occurring on at least 11% of slides.
In conclusion, there is ample opportunity to increase the positional consistency of objects by mapping “near match” objects into the exact same extent. If such a system could be developed, it would also create an opportunity to increase the style consistency of position groups by propagating style changes to all members of the associated group. Such a system would have greater restyling power than the Slide Master while also offering implicit object templates abstracted directly from slides, rather than explicit slide templates designed indirectly in a separate view. Many of the problems of template-based layouts could thus be avoided.
**STYLESNAP**
Following on from the previous corpus analysis, we designed a mixed-initiative tool called StyleSnap that can be applied whenever a slide deck has evolved into a state of misalignment or inconsistency. Our first goal was to develop a method to align objects across slides without introducing new problems, such as object overlap and image distortion. Our second goal was to develop a user interface that would allow a user to invoke StyleSnap, view the resulting groups of objects snapped to the same extent, then undo, merge, or modify these groups accordingly. Our implementation of these concepts was through an add-in for PowerPoint 2013. We now describe the high-level design of the StyleSnap user interface and details of the underlying snapping algorithm.
**StyleSnap Interface**
Figure 1 shows the result of pressing the “StyleSnap” button in the PowerPoint ribbon menu. A side pane appears showing the resulting position groups – groups of objects across slides that have been mapped to the same position and size as a result of clustering and snapping edge values for the four edge types. Each group is listed showing its color (matching the color of the highlight boxes added to the corresponding objects), the number of objects in the group, a “Style Painter” icon for manually adding objects to the group and merging groups with one another, and a checkbox indicating whether the group is currently in its “Snap” state. Only non-singleton object groups are snapped by default, but the user can toggle snapping for both individual groups and for all objects. Snapped objects are shown with a solid border in their new extent; unsnapped objects with a dashed border in their original extent. If all objects were already in the same extent and unaffected by snapping, the Snap checkbox is disabled.
Clicking on any slide object or position group highlights the object, its position group in the side pane, and all other objects in the group. For efficient visual browsing of this position group, the system also gathers all slides containing highlighted objects and places them in a temporary “Selected” section at the start of the slide list, with other slides organized in an “Unselected” section (existing sections are recreated after StyleSnap tool use). The color of the selected group’s highlight boxes also fully saturates for clear differentiation from the colors of other object groups. This visual feedback allows the user to quickly confirm whether the automatic snapping of the position group is desirable. When using the StyleSnap tool, any location, size, font, or other format changes to an object are propagated across all other objects in its group and made visible in real-time by simultaneous updates to all slides in the “Selected” section of the slide list.
The Style Painter is similar to the Format Painter but extends to position and size object attributes. Activating the Style Painter for a particular group by clicking the corresponding icon means that the next object or position group selected will automatically be merged with the active group.
Clicking the “Apply” button saves the changes to the deck and reverts to standard editing. Clicking “Discard” undoes the changes and reverts the deck to its pre-StyleSnap state.
**Snapping through Hierarchical Clustering**
We wanted to be able to “snap” objects into (a) fewer extents with (b) better alignment across slides. Each of these goals suggests a different approach. To reduce object extents, we could apply hierarchical clustering [18, 37] directly to object extents (since unlike, e.g., K-means, it does not require a prior choice for the number of clusters). However, this will not result in good cross-slide alignment if the extents of objects within slides are misaligned. There is also a sparsity problem – objects may all be sufficiently different that no two objects should be mapped to the same extent, even though any individual object edge may be close to the corresponding edges of many other objects.
Figure 1. StyleSnap interface. The right pane shows all the groups whose objects are moved to the same position as a result of snapping. The user can manually undo and merge position groups, as well as simultaneously styling all objects in the same position groups with real-time feedback in the slide list.
This led us to a solution based on hierarchical clustering of not object extents, but individual object edges. That is, we independently perform hierarchical clustering of all the left, top, right, and bottom edge positions of the objects within a presentation, then apply the results back to those objects. The result of such hierarchical clustering for each of the four edge types is a hierarchy of depth \(N\) where \(N\) is the number of distinct edge positions. At level 1 in the hierarchy, there are \(N\) clusters of 1 edge position. At level \(N\) in the hierarchy, there is 1 cluster of \(N\) edges, all snapped to the modal edge position. From level 1 to level \(N\), the algorithm always merges the closest pair of clusters and snaps all edges in the new cluster to the new modal edge position.
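The per-edge clustering described above can be sketched as follows. This is a naïve agglomerative merge standing in for the actual SLINK implementation, and the example edge values are illustrative:

```python
from collections import Counter

def snap_levels(edges):
    """Agglomerative single-linkage clustering of 1-D edge positions.
    Yields the clustering at each level, from N singleton clusters (level 1)
    down to one cluster (level N); every cluster is snapped to its modal
    edge value. (A sketch of the idea; the paper uses the SLINK algorithm.)"""
    clusters = [[e] for e in sorted(edges)]
    while True:
        snapped = [Counter(c).most_common(1)[0][0] for c in clusters]
        yield [(s, list(c)) for s, c in zip(snapped, clusters)]
        if len(clusters) == 1:
            return
        # Single linkage on sorted 1-D data: merge at the smallest adjacent gap.
        gaps = [clusters[i + 1][0] - clusters[i][-1]
                for i in range(len(clusters) - 1)]
        i = gaps.index(min(gaps))
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]

left_edges = [100, 100, 102, 300, 304]
levels = list(snap_levels(left_edges))
for snapped, members in levels[3]:   # the level with two clusters
    print(snapped, members)          # 100 [100, 100, 102] / 300 [300, 304]
```

In one dimension, always merging the closest pair of clusters reduces to merging across the smallest gap between adjacent sorted clusters, which keeps the sketch short.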
We can determine the optimal clustering level for each edge type (left, top, right, and bottom) by an energy function that aims to balance similarity within and between clusters (i.e., to make close edge values the same while keeping distant edges separate). However, the naïve updating of object edge positions based on the optimum clustering of each edge type could easily cause problems in visual appearance, such as:
1. **Object inversion**, since independent edge clustering does not respect relative positions of opposing object edges;
2. **Object overlap**, since initially separated objects can be moved into a state in which their content regions overlap;
3. **Image distortion**, since images are especially sensitive to aspect ratio changes.
Given that we can identify and correct these problems only after position updates, our divide-and-conquer approach of clustering different edges independently is most suited to the agglomerative, bottom-up method of hierarchical clustering supported by the SLINK algorithm [37]. Overall, our snapping approach is summarized in Algorithm 1.
**Systematic Performance Evaluation of Snapping**
Even with around 500 objects (the 97th percentile in our PowerPoint corpus), snapping completes in several seconds. To evaluate the degree to which our automated snapping matches human judgments, we recruited two professional
software engineers to test its accuracy and reliability across a wide range of typical decks. They tested 80 presentations from our PowerPoint corpus, with varying numbers of slides (15 – 36; the lower and upper quartiles) as well as varying numbers of objects per slide (2.5 – 6.0 with mean 4.7).
Table 2 shows how the number of singleton position groups (a mean of 53.6) forms a long-tail, as expected from the earlier corpus analysis. Problems resulting from snapping are well controlled in both group types, limited to overly shrinking objects, creating inconsistencies among sets of slide objects that were previously consistent (e.g., in size, spacing, or alignment), and breaking spatial relationships between objects (e.g., by moving arrows within diagrams). Shrinkage can be dealt with by some simple additional rules, but more complex rules or manual object grouping would be needed to resolve problems arising from object relationships. Overall, only 4.5 position group modifications on average were required to reach a satisfactory state of alignment.
**EVALUATION OF STYLESNAP**
We conducted two user studies to evaluate the performance of StyleSnap against two alternative approaches to cross-slide alignment and styling: Repeat Editing and Slide Master. For the study tasks we selected a slide deck from our internet corpus with an average of 2 objects on each of 31 slides – one from a Slide Master template, one added manually – with a balance of title-and-bullets and image-and-caption slides.
**Task: Cross-Slide Alignment of Misaligned Objects**
There were 14 slides containing one picture and one textbox used as a caption, with no perfect alignment of any pair of objects. The task was to align all four object edges of groups of images with similar aspect ratios and make the caption format the same across all 14 slides. For consistency, we defined target object groups for the 14 images based on three aspect ratios: 9 portrait images, 4 landscape, and 1 panoramic. We also set the target text format to be 20pt Red Italic Arial.
**Expert Performance Prediction**
To predict the expert performance of each technique, we adopted an approach akin to Keystroke Level Modelling (KLM) by constructing a task model for each system and quantifying its time parameters through a user study. Since these models do not account for switching costs, they represent predicted lower bounds on task completion time.
**Repeat Editing:** [Start time $T_{re}$] Make a duplicate of Slide 13 and use it as a template to align content from Slide 14. Delete unwanted objects and slides. [End time $T_{re}$]
Reference task time $= T_{re} \times 14$ slides
**Slide Master:** [Start time $T_{sm1}$] Open the Slide Master view and create a layout with content placeholders in the desired positions and aspect ratios. Close the Slide Master. [End time $T_{sm1}$] [Start time $T_{sm2}$] Go to Slide 13 and update it to the new custom layout. Delete unwanted objects. [End time $T_{sm2}$]
Reference task time $= T_{sm1} \times 3$ slide layouts (one per aspect ratio) + $T_{sm2} \times 14$ slides (apply new layout to each)
**Algorithm 1. Global snapping of object edge positions**
1. **Cluster object edge positions in preparation for snapping**
For each edge type $T$ (left, top, right, bottom):
a. Cluster edge positions using the SLINK algorithm
b. Calculate the optimal clustering level $L_T$ to minimize the following distance-based energy function:
$$E = a \sum_{c} (C_{c} - C_{global})^2 + (1 - a) \sum_{c} \sum_{l_{i} \in c} (l_{i} - C_{c})^2$$
$C_{c}$, $C_{global}$, and $l_{i}$ represent the centroid edge value of cluster $c$, the centroid of all edge values, and the edge value of one object, respectively. $a$ is the coefficient that weights between-cluster against within-cluster similarity, set to 0.5
2. **Apply snapping progressively, resolving overlaps and inversions**
For $i = 1$ to $N$ (number of adaptive steps, e.g., 20):
a. For each edge type $T$:
i. Apply the snapping associated with clustering at level $[(i/N) \times L_T]$ (where level 1 is no clustering) to objects that have not previously been inverted or overlapped.
b. For each object $O_{j}$:
i. If $O_{j}$ is newly inverted or overlapped, reset it to its position at step $i-1$.
c. If no objects can be snapped further, move to (3).
3. **Resolve image distortion**
For each snapped image object $I$:
a. If the aspect ratio of $I$ has changed by more than a threshold proportion (set to 0.1), shift the furthest-moved edge of $I$ to restore the original aspect ratio; otherwise allow the change.
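The level selection in step 1b can be illustrated numerically. We assume a squared-deviation form with per-cluster centroids and equal weighting, which is one reading of the energy function; the edge values are illustrative:

```python
def energy(clusters, a=0.5):
    """Energy of a candidate clustering of 1-D edge values: a weighted sum of
    between-cluster spread (cluster centroids vs. the global centroid) and
    within-cluster spread (edge values vs. their cluster centroid)."""
    values = [v for c in clusters for v in c]
    c_global = sum(values) / len(values)
    cents = [sum(c) / len(c) for c in clusters]
    between = sum((cc - c_global) ** 2 for cc in cents)
    within = sum((v - cc) ** 2 for c, cc in zip(clusters, cents) for v in c)
    return a * between + (1 - a) * within

# Two candidate clusterings of the same five left-edge values.
tight = [[100, 100, 102], [300, 304]]   # snap near-equal edges together
loose = [[100, 100, 102, 300, 304]]     # merge everything into one cluster
print(energy(tight) < energy(loose))    # True: the balanced split wins
```

Scanning the clustering hierarchy and keeping the level with the lowest energy makes close edge values identical while leaving distant edges in separate clusters.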
Table 2. Performance of StyleSnap auto-alignment on 80 decks.
<table>
<thead>
<tr>
<th>Position Groups of 2+ Shapes</th>
<th>Mean (SD)</th>
</tr>
</thead>
<tbody>
<tr>
<td># of position groups</td>
<td>10.3 (10.0)</td>
</tr>
<tr>
<td># of position groups where snapping causes problems</td>
<td>0.7 (1.2)</td>
</tr>
<tr>
<td># of merges among 2+ position groups required</td>
<td>0.8 (1.1)</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Position Groups of 1 Shape (Singletons)</th>
<th>Mean (SD)</th>
</tr>
</thead>
<tbody>
<tr>
<td># of position groups</td>
<td>53.6 (33.7)</td>
</tr>
<tr>
<td># of position groups where snapping causes problems</td>
<td>0.6 (2.5)</td>
</tr>
<tr>
<td># of additions to 2+ position groups required</td>
<td>2.4 (2.8)</td>
</tr>
</tbody>
</table>
**StyleSnap:** [Start time $T_{ss1}$] Use the Style Painter to add the portrait image on Slide 28 to the position group of the portrait image on Slide 13. [End time $T_{ss1}$] [Start time $T_{ss2}$] Set the caption text to the target format. [End time $T_{ss2}$]
StyleSnap groups the captions into a single group and images into groups with 4, 4, 2, 1, 1, 1, and 1 objects. Reaching three position groups for these images requires a maximum of eight changes using the Style Painter (one object at a time).
**Reference task time** $= T_{ss1} \times 8$ group changes (to make 3 groups) + $T_{ss2}$ (applies to all captions at once)
**Procedure**
We first explained and demonstrated each task before asking participants to repeat the procedure until they had mastered it. We used two such slides for our demonstration, with the task to make the layout and style of Slide 14 the same as Slide 13. We offered detailed instructions for each approach and allowed several timed trials until we did not observe large time variances. Participants’ best times were recorded and used to calculate the reference task time (as is common when seeking to understand expert performance, e.g., in [40]).
We recruited 12 participants (6 male and 6 female, average age 25; PA1–PA12) from a local university. All were familiar with PowerPoint and fluent in English. We counterbalanced the condition order. Cash equivalent to $15 USD in local currency was offered as compensation.
**Expert Performance Results**
Table 3 summarizes the completion times of sub-tasks in each of the three techniques. Entering these times into our task models predicts average expert performance of 353, 322, and 107 seconds with Repeat Editing, Slide Master, and StyleSnap respectively. This prediction is highly promising and indicates that StyleSnap could substantially reduce the user effort of cross-slide alignment and styling. The results also show that updating any number of target slides with StyleSnap is comparable to refactoring a single slide with Slide Master, once the StyleSnap groups are correct and the Slide Master templates are created. Since, on average, 4.5 StyleSnap group modifications are required to correct the snapping of a whole deck, StyleSnap is about as fast at correcting alignment throughout this particular deck and refactoring 14 slides of a particular design (69s) as Slide Master is at creating a single template and refactoring just two slides (71s).
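As a sanity check, the predictions can be reproduced from the task models and the Table 3 sub-task means. We assume 14 Repeat Editing refactors, one per target slide; rounding accounts for the small remaining differences:

```python
# Sub-task completion-time means (seconds) from Table 3.
T_re, T_sm1, T_sm2 = 25.2, 43.2, 13.8
T_ss1, T_ss2 = 11.5, 15.2

repeat_editing = T_re * 14              # one refactor per target slide
slide_master = T_sm1 * 3 + T_sm2 * 14   # 3 layouts, then update 14 slides
stylesnap = T_ss1 * 8 + T_ss2           # 8 group changes + one caption restyle

print(round(repeat_editing), round(slide_master), round(stylesnap))
```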
**Novice User Performance Measurement**
We also conducted a supplementary study with the same task set to measure the performance of novice users. The procedure was also the same, except that participants were asked to modify all 14 slides in each task. This study therefore offers realistic performance observations on the three techniques that account for the effects of learning and fatigue. From the expert performance prediction results, we formed two hypotheses for this study: [HA-1] StyleSnap would be faster for global alignment than existing tools; and [HA-2] the perceived workload of global alignment tasks would be smaller with StyleSnap. StyleSnap training included 5-10 minutes of free exploration on a range of decks. Following the study tasks, participants completed a NASA-TLX questionnaire [17] for each approach. We recruited another eight participants for this study (5 male and 3 female, average age 25; PB1–PB8) from our research institute.
**Novice User Performance Results**
One-way repeated-measures ANOVA revealed a significant difference among techniques (F(2,14) = 90.8, p < .0001, η² = .93), with StyleSnap faster than the other two (p < .0001 for both), as shown in Figure 2, supporting HA-1. Analysis of NASA-TLX responses found seven significant pairwise results, as shown in Table 4, partially supporting HA-2.
While the cost of both Repeat Editing and the Slide Master scale linearly with the number of target objects, the cost of StyleSnap scales only with the number of corrections to groups containing target objects. The fewer groups that need to be corrected, the closer the performance of StyleSnap will get to a constant time cost, whatever the number of objects.
<table>
<thead>
<tr>
<th>StyleSnap Task for Expert Users: Completion Times</th>
<th>Mean (SD)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T_re: Refactor a target slide using Repeat Editing</td>
<td>25.2 (6.4)</td>
</tr>
<tr>
<td>T_sm1: Make a slide template using Slide Master</td>
<td>43.2 (9.6)</td>
</tr>
<tr>
<td>T_sm2: Refactor a target slide to a slide template using Slide Master</td>
<td>13.8 (2.1)</td>
</tr>
<tr>
<td>T_ss1: Find and add an object to a group using StyleSnap</td>
<td>11.5 (2.7)</td>
</tr>
<tr>
<td>T_ss2: Update all target slides using StyleSnap</td>
<td>15.2 (5.6)</td>
</tr>
</tbody>
</table>
Table 3. Sub-task completion time results in the expert user performance study.
Figure 2. Predicted lower bounds on expert performance (bars without borders) and novice user mean performance time (bars with borders). Error bars represent 95% Confidence Intervals.
**Qualitative Discussion**
Participants in both studies unanimously preferred StyleSnap over Repeat Editing and Slide Master and many commented on how it would support their everyday authoring. The Slide Master was also widely criticized, e.g., “If we edit using Slide Master, we really don’t know what the final result will look like. After we edit it we should go back to see the result… if is not satisfying we should go back again to the slide master view… If we use StyleSnap we can see the result directly” (PA12).
The cost of automated snapping and manual modification was perceived to be relatively low. Participants described the “clustering” as “very efficient… it really helps in saving time” (PA4); “really useful” (PA6); “a great idea… a simpler way to edit a lot of shapes” (PA9). Participants also appreciated being able to simultaneously edit all objects in a position group: “I can do quick edits for the whole content in the slides” (PA5); “You can do one ‘style’ and apply it everywhere fast” (PA2).
The study also highlighted areas for improvement. Several participants expressed initial confusion about the meaning of colors, suggesting the use of semantic descriptors, icons, or thumbnails to make the mapping clearer. One participant also found the “shuffling” of the slide list distracting, and another suggested using a drag-and-drop mechanism for Style Painting. From their free exploration with multiple decks, multiple participants also expressed a desire for the algorithm to automatically identify diagram-like collections of objects and group them for StyleSnap alignment purposes.
<table>
<thead>
<tr>
<th>StyleSnap Task for Novice Users: NASA-TLX Subjective Workload Results</th>
<th>Mean (Median, 25/75th quartile)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T_re: Refactor a target slide using Repeat Editing</td>
<td>25.2 (6.4, 11.5, 43.2)</td>
</tr>
<tr>
<td>T_sm1: Make a slide template using Slide Master</td>
<td>43.2 (9.6, 22.5, 52.5)</td>
</tr>
<tr>
<td>T_sm2: Refactor a target slide to a slide template using Slide Master</td>
<td>13.8 (2.1, 5.0, 27.5)</td>
</tr>
<tr>
<td>T_ss1: Find and add an object to a group using StyleSnap</td>
<td>11.5 (2.7, 10.2, 13.1)</td>
</tr>
<tr>
<td>T_ss2: Update all target slides using StyleSnap</td>
<td>15.2 (5.6, 14.0, 15.5)</td>
</tr>
</tbody>
</table>
Table 4. NASA-TLX subjective workload results for novice users completing the StyleSnap task. Lower values are better.
The study also surfaced tensions in the design of StyleSnap, which is predicated on the value of object alignment and style consistency throughout a slide deck. As one participant explained, “With StyleSnap I can easily design the layouts, especially if I want to change the same layouts, not manually edit the slides. But I think it will be difficult to change if on each slide the designs are different” (PA8). A different approach is required to make repetitive changes to objects that do not share the same position – this is the purpose of the complementary FlashFormat tool that we present next.
**FLASHFORMAT**
StyleSnap supports aligning objects across slides in ways that increase processing fluency, but it is not appropriate for repeated-object restyling when target objects are placed at different locations. FlashFormat offers the ability to apply global style changes with more flexible object selection.
FlashFormat is a programming-by-example system [29] that allows the user to perform repetitive formatting tasks in PowerPoint. The interface of FlashFormat is shown in Figure 3. It consists of only two buttons: “Start New Examples” and “FlashFormat”. The user starts by clicking “Start New Examples” and then gives some examples of the formatting changes they would like to perform. They can then click “FlashFormat”, at which point the system infers the least-general generalization (LGG) from the given examples and applies it to the rest of the document (“FlashFormat-all”).
In Figure 3b the user gives two examples of changing diamond shapes to the color yellow, for which the system infers the LGG that changes all diamond shapes to yellow (Figure 3c). The inferred generalization depends on the given examples; for instance, if the user instead desires to color any object with underlined text, then they may give examples with different shapes and colors (e.g., white diamond, white rectangle, gray rectangle) containing underlined text.
The inference performed by FlashFormat is based on the XML specifications of objects (using the OOXML file format), and hence covers all properties expressed in this specification. The system is based on a domain specific language for expressing transformations on XML structures, and a synthesis algorithm for inferring LGG programs within this language. In this work, we focus on the interaction model and usability studies on the tool. The underlying inference algorithm builds on prior work in program synthesis [32].
**An interactive and incremental generalization process**
The user can guide the system to the desired generalization in an interactive and incremental fashion. Since the system is conservative in its inference of the generalization, the user can give additional, less similar examples to cover their intended selection criteria. For example, if the user wants to color all objects containing underlined text but only gives examples of diamond shapes, then the system may infer the less general hypothesis of coloring only diamond shapes containing underlined text. However, the system is designed to incrementally accept examples, so at this point the user can give more examples through manual editing and then FlashFormat again. The system will incorporate the new examples to expand the generality of the transformation.
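As an illustration only, least-general generalization over flat attribute records can be sketched as below. This toy Python version (dicts instead of OOXML, and the change taken from the first example) is our simplification of the actual DSL-based synthesis:

```python
def learn(examples):
    """Infer (condition, change) from (before, after) attribute-dict pairs.
    Condition: attribute/value pairs shared by ALL before-states (the
    least-general generalization of the examples). Change: attributes whose
    value differs between before and after in the first example."""
    befores = [b for b, _ in examples]
    cond = {k: v for k, v in befores[0].items()
            if all(b.get(k) == v for b in befores)}
    first_before, first_after = examples[0]
    change = {k: v for k, v in first_after.items()
              if first_before.get(k) != v}
    return cond, change

def flash_format(objects, cond, change):
    """Apply the change to every object matching the condition."""
    return [{**o, **change} if all(o.get(k) == v for k, v in cond.items())
            else o for o in objects]

# One example: a white diamond recolored yellow. The inferred condition is
# still narrow: {"shape": "diamond", "fill": "white"}.
ex1 = ({"shape": "diamond", "fill": "white"},
       {"shape": "diamond", "fill": "yellow"})

# A second, less similar example broadens the generalization.
ex2 = ({"shape": "diamond", "fill": "gray"},
       {"shape": "diamond", "fill": "yellow"})
cond, change = learn([ex1, ex2])   # condition: {"shape": "diamond"}
deck = [{"shape": "diamond", "fill": "blue"}, {"shape": "rect", "fill": "blue"}]
print(flash_format(deck, cond, change))   # only the diamond turns yellow
```

Adding the gray-diamond example removes "fill": "white" from the condition, mirroring how dissimilar examples expand the inferred transformation's scope.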
The user can also backtrack if the system goes wrong at some stage. If the generalization inferred is not applicable to any other objects in the document, then the system applies the transformation to the closest matching objects, according to a similarity measure on the XML specification of objects. If this inference is inaccurate, then the user can undo the changes using the standard “undo” feature, and provide more examples to guide the system in the right direction.
The system also offers more restricted applications of the inferred transformations, which allows the user to verify the transformation before applying globally. After giving examples, the user can select a set of objects or slides and then click “FlashFormat” to only apply the transformation to the selected objects or slides (“FlashFormat-selected”).
**EVALUATION OF FLASH FORMAT**
We conducted another user study to examine use of FlashFormat. More specifically, we had three hypotheses: [HB-1] Users would be able to achieve their desired global restyling through FlashFormat; [HB-2] experience of using FlashFormat would develop a sense of which examples to give, and [HB-3] using FlashFormat would be faster than standard editing tools across a range of tasks.
**Tasks: Cross-slide Shape Restyling**
We prepared a slide deck downloaded from the Internet. It contained flowcharts spanning five slides, using different shapes such as rectangles and diamonds. Such diagrams are commonly encountered in slide presentations and cannot be restyled through the Slide Master because objects invariably occupy unique slide extents. Participants were asked to
perform two changes specified by the experimenter either with FlashFormat or manually (use of Format Painter was allowed in this condition). In each task, we varied the number of objects to be restyled as well as the difficulty of example giving to create an approximately balanced workload per task. The participants were asked to make two systematic style changes in each trial, but were not allowed to change any other visual attribute or text content of the objects. After this controlled task, participants were given another slide deck and asked to perform global restyling as they liked. They were encouraged to use FlashFormat to make changes and were given five minutes for this part of the study.
The interface used in this study did not include visual feedback before clicking the FlashFormat button. Our intention was to study how well participants could understand the behavior of LGG without being influenced by the feedback design, and to test its effectiveness in its most basic form (any additional improvements to feedback would also generally favor the performance of FlashFormat).
**Procedure and Participants**
We first explained FlashFormat with two examples of slide decks and asked participants to perform global restyling tasks. The slide decks included easy and difficult cases for FlashFormat. This pre-task session was intended to make participants knowledgeable about the system and able to perform restyling without needing help from the experimenters. We provided explanations that choosing more and more diverse examples would lead to better results (referred to as “the golden rule”), but we did not force participants to do so. During each trial, we measured the performance time between when the participants started a task and when they confirmed all necessary changes on all specified objects. At the end of the study, participants were asked to describe their experience of FlashFormat and future improvements they would like, and to fill out a questionnaire.
We recruited 12 participants (8 male and 4 female, with average age 25; PB1–PB12) from our research institute. All were familiar with PowerPoint and fluent in English, and none of them participated in the first study. The same compensation was offered to all participants in this study.
**Results**
Figure 4 shows the mean performance time for each number of objects with the two techniques. Times in the manual editing condition varied in the range of 90-100 seconds, whereas FlashFormat times were constant at about 40 seconds. A two-way repeated-measures ANOVA revealed that the technique had a significant main effect ($F_{(1,11)} = 45.5, p < .0001, \eta^2 = .81$). The interaction of the technique and the task was also significant ($F_{(2,22)} = 4.40, p < .05, \eta^2 = .29$). The post-hoc analysis of the interaction effect confirmed that FlashFormat was significantly faster than manual editing ($p < .0001$). This quantitative result demonstrates substantial improvements in global restyling over existing editing tools and methods, supporting our hypothesis HB-3.

<table>
<thead>
<tr>
<th>Questions</th>
<th>Mean response (SD)</th>
</tr>
</thead>
<tbody>
<tr>
<td>I could always give appropriate examples to reach my desired end state.</td>
<td>5.1 (1.6)</td>
</tr>
<tr>
<td>I could anticipate the effects of FlashFormat before actually doing it.</td>
<td>4.6 (1.4)</td>
</tr>
<tr>
<td>It is necessary to anticipate the effects of FlashFormat to successfully use the tool.</td>
<td>5.8 (1.2)</td>
</tr>
<tr>
<td>It is annoying to repeatedly undo unwanted effects from FlashFormat.</td>
<td>4.3 (2.1)</td>
</tr>
<tr>
<td>I would prefer to use FlashFormat rather than make repetitive changes one-by-one.</td>
<td>6.5 (0.8)</td>
</tr>
</tbody>
</table>
Table 5. Responses of the post-experimental questionnaire. (1: strongly disagree – 7: strongly agree)
Table 5 shows the questionnaire results. Participants responded positively to their experience with FlashFormat, unanimously agreeing that FlashFormat was preferable to manual editing. As with StyleSnap, many participants commented on how it would support their regular authoring practices, especially with regard to diagram formatting. We further examined the qualitative study data to understand the reasons for these positive results.
**Qualitative Analysis**
In the pre-task session, participants exhibited a near-universal tendency to give one example, repeatedly apply FlashFormat-all until the point of over-generalization, undo, and repeat. That is to say, participants did not follow the golden rule despite being reminded after each undo action. However, as the system repeatedly failed to produce their intended generalization from a single example, participants began giving more examples. At first, these were typically two examples given on the first two slides where they were applicable, but we observed a gradual shift to a more systematic selection of diverse examples, as initially suggested in the golden rule. Not only did participants learn to follow the golden rule through experience, they also learned for themselves what it means for examples to be diverse and how many to give depending on the situation:
“Doing things automatically is good but sometimes I need to find out the differences of the shape, for example this is red and this is orange so I know I have to give two examples, one on the red and another on the orange, to let the system know that I want to change all the shapes no matter what the color is.” (PB2)
“If you want to change the colors for all slides, you have to review all the slides first. For me it should be: first review all the slides, and then pick all the different parts… different stuff in the same parts you want to change, and then change the different properties, and go ahead and FlashFormat and it should work.” (PB8)
Overall, this suggests that repeated feedback from applying the LGG of self-selected formatting examples is sufficient to promote self-discovery of the optimum example-giving and generalization strategy. Our questionnaire results also support this, showing that participants agreed that they could provide examples and anticipate the effect. Thus, we concluded that HB-1 and HB-2 are also supported.
Nevertheless, the learning process could still benefit from additional guidance. One participant highlighted how before starting new examples, they “always forget to press the button” (PB12). Another participant described how while giving examples, they “sometimes feel lost about what to do next -- should I choose more examples, or should I apply FlashFormat first?” (PB1). Finally, both before and after applying FlashFormat-all, several participants reported the need for better feedback about which objects either would be or had been changed. Suggestions for improvements included highlighting the scope of objects that would be changed next, highlighting the differences of objects that share some attributes with the selected object to guide example selection, learning from multi-object examples where the relative changes are important from a graphic design perspective, and removing the need to press a start button, either by showing a history of recently edited attributes that will be applied or suggesting multiple transformations from a single example.
Even without these suggested changes, the approach fares favorably against alternatives. Compared with the PowerPoint Format Painter, it was found to “work more efficiently” (PB3) and to be “much faster” (PB12) and “much more powerful” (PB4) because it can “work globally” (PB4) and “between slides” (PB5) without overwriting all attributes (PB4). FlashFormat also “has some features Slide Master does not provide” (PB7), such as the ability to work on groups of objects that do not share the same location. In this respect, FlashFormat also surpasses the restyling power of StyleSnap, in which object groups are formed only by shared extents.
**OVERALL DISCUSSION**
We presented two complementary tools for quickly making global changes that can improve the visual consistency of slide decks. We now synthesize the limitations, lessons, and future work from the design and evaluation of these two tools.
One limitation of the current work is our focus on the design of slide visuals only, and not of the underlying presentation material [23] or narrative structure [31]. While the balance of presentation preparation time should arguably be in favor of content and story, any time saved on styling slide visuals could conceivably be transferred to these other activities.
Another limitation is our sole use of PowerPoint for corpus analysis and prototyping. Since all slides and slideware are structurally similar, we expect our proposed solutions to generalize to other slideware and their associated slides. As with generalization to other domains (e.g., vector-based graphical editing), this has not yet been demonstrated.
We have also already noted that StyleSnap does not preserve size and spacing constraints within a slide. Constraint-based reasoning as used for single layout beautification [41] could provide a solution. Similarly, a limitation of FlashFormat is that imperceptible differences in attribute values are still viewed as differences by the system, leading to potential mismatches between user expectations and action outcomes.
Finally, we have evaluated our two tools independently rather than as a single system, such that we may evaluate their individual value. We hope to combine the functionality of the two tools into a single system in future work, taking into account the lessons discussed next.
**Lesson 1: Suggest generalizations from single edits**
One of the advantages of StyleSnap over FlashFormat is that users can confidently edit all of the objects in a position group at once rather than after giving several examples. One of the advantages of FlashFormat over StyleSnap is that object groups can be formed from any set of shared attributes, not just edge positions. Future work should investigate how to achieve both high predictability and flexibility from single examples. This could be achieved by suggesting multiple candidate transformations after each object edit, or suggesting snapping results for attribute values beyond edge positions (e.g., to create small, consistent sets of colors and font sizes) after each object selection. Users could thus make progress by confirming the desirability of candidate edits.
**Lesson 2: Support state preservation as well as propagation**
In both StyleSnap and FlashFormat, there were times when the user already had an example of their desired end state but were forced to recreate it for the benefit of the tool. In StyleSnap, this was because the snapped object groups represented the modal values of the clustered edges rather than the extent of a specified object. In FlashFormat, this was because only the edited attributes of an object contribute to the inferred example, not any other existing attributes. Future work should explore how to give examples of both the “change to” and “keep as” variety on a per-attribute basis, without requiring an enumeration of all object attributes.
**Lesson 3: Show the scope of prospective changes**
In StyleSnap, the real-time feedback from the combination of object highlights and the Selected slide section gave users confidence in the scope of their changes before and as they were making them. However, this feedback also created much visual noise for dense slides and much scrolling of the slide list for large numbers of selected objects. As the literature suggests the importance of feedback in systems of this type [29], future work should explore alternative feedback strategies for tools like both StyleSnap and FlashFormat.
**Lesson 4: Incorporate design patterns and principles**
StyleSnap makes the layout of objects consistent across slides, but it does not give any guidance about the desirability of those layouts. Similarly, FlashFormat can make large-scale changes easily, but provides no feedback about the desirability of those changes (e.g., on the contrast between text and its background image following a global change to the color of overlaid captions). Future work should explore how to resolve aesthetic issues through assisted layout and styling that considers factors such as visual balance [25] and mood [21]. Supporting consistency not just within sets of user-created visuals, but with external design patterns and principles, remains a significant research challenge.
REFERENCES
Middleware
Readings:
Middleware: Chen, Welch Self-stabilizing dynamic mutual exclusion paper
Dolev, Schiller, Welch Random walk for self-stabilizing group communication
Virtual objects Dolev, Gilbert, et al. Geoquorums paper
Chatzigiannakis, Nikoletseas, Spirakis. On the average and worse-case efficiency of some new distributed communication and control algorithms for ad hoc networks.
Chatzigiannakis, Nikoletseas, Spirakis. An efficient communication strategy for ad-hoc mobile networks.
Chatzigiannakis, Nikoletseas, Spirakis. An efficient routing protocol for hierarchical ad-hoc mobile networks. Next:
1 Last time: Self-stabilizing dynamic mutual exclusion
From Chen, Welch: Self-Stabilizing Dynamic Mutual Exclusion for Mobile ad hoc Networks, started last time.
Goal is to produce a mutual exclusion algorithm that runs on a very badly behaved dynamic network.
With continual “churn”, as before.
Also self-stabilizing, that is, it should tolerate occasional total system state corruption failures, and manage to recover back to normal operation, before too much time elapses.
I found the paper quite confusing. Many ideas intermingled, presented in a confusing order. Still, there’s some ideas worth seeing.
Main protocol ideas:
1. Processes interested in accessing the critical region formally join a group of “active” processes, and when they are no longer interested, they leave the group. While in the group, they keep getting chances to enter the critical region.
2. Use a token-circulation sub-protocol (similar to the LR protocol of Monday’s paper) to circulate a token around the currently active processes (members of the group), and use this token to control access to the critical region.
3. Mechanisms to achieve self-stabilization:
(a) Send messages, tokens repeatedly.
(b) Enforce bounds on sizes of variables (e.g., counters).
(c) Use timers, to time out old information.
1.1 System model
Digraph network (this is different from the others we’ve been considering, which have been undirected—I’m not sure what the significance of this change is).
Point-to-point communication.
Assum$_0$: Distinguished processor $p_0$.
Assum$_1$: Known upper bound $N$ on number of processors; Processors have uids in $[1, ..., N]$.
Assum$_2$: Known upper bound $d$ on message delay between neighbors, for messages that are actually delivered. But messages can be lost.
Notice that this is the first place where we’ve seen the use of time in this series of papers. It’s because of the self-stabilization.
Assum$_3$: Upper bound $g$ on number of messages generated by a processor in each unit time interval.
(Maybe they don’t need this assumption: Since processors have clocks or timers, perhaps they could just schedule their sending steps to satisfy this condition.)
And variables are all bounded-size (needed for self-stabilization).
The combination of assumptions ensures that, in stable operation, the total number of messages in the system is bounded; this makes it reasonable to assign messages ids from a bounded range.
Clocks:
They assume timers that decrease at the rate that real time increases.
They could just as well have said that they have logical clocks that increase at the same rate as real time, though the values aren’t necessarily synchronized.
Anyway, they assume that they can set timers, and detect when they expire; timeout expiration is used to trigger activities, just like other kinds of inputs.
They consider local computation events and topology change events.
A topology change event instantaneously changes the topology; however, they make no assumption about notifications.
Processes will presumably have to execute sub-protocols to decide who their neighbors are.
Assum$_4$: At each processor no more than one event is activated at the same time.
And the local computation time is negligible.
The time a processor stays in Crit is negligible (this is a funny assumption—but they say later that it isn’t needed—so why include it?).
Executions: The usual kind of definition, except that they don’t have to begin in an initial state. Because the paper deals with self-stabilization, so any state can be the initial state of an execution.
1.2 The problem
Their variant of mutual exclusion involves join and leave requests, which announce that a process is or isn’t interested in the critical section. This suggests that it can remain a member of the group if it wants to submit a new request immediately after it leaves the critical region.
They want mutual exclusion, but not always: just after stabilization occurs.
Eventual stopping:
Any processor that becomes inactive (its last request is to leave, or it never requested to join in the first place), eventually stops entering Crit.
No deadlock: Some active process enters Crit infinitely often, if the execution is infinite.
(The above isn’t quite stated correctly: They define a process to be “active” if it eventually stops submitting join/leave requests and the last request was a join. But what about infinite executions in which each process submits join and leave requests infinitely many times? Then there are no active processes, so the no-deadlock property is automatically false.)
They list other, stronger progress properties:
Lockout-freedom (no starvation), or bounded waiting.
(These do not have the same problem in their statements, since they involve a particular process i that is assumed to be active.)
They claim that they can achieve any of the three progress properties, based on what they assume about the underlying token circulation algorithm.
1.3 Background: Dijkstra’s and Varghese’s algorithms
Their starting point is earlier work, by Dijkstra and by Varghese, on self-stabilizing mutual exclusion in a static ring of processes.
Dijkstra’s is for a static shared memory model, Varghese’s for a static network with FIFO links.
In these algorithms, processes use a local checking rule to decide when they can enter the critical section.
Namely, the token carries a value.
Process $p_0$ checks that the token’s value is equal to the one it remembers in its current state, whereas everyone else checks that it is different.
When $p_0$ leaves the critical section, it increments the token’s value and sends it to the next process.
When anyone else is done, it just sends it along unchanged.
They would like to adapt these algorithms to their dynamic network setting. Problems:
1. The simple local check used by D and V fails in the current situation, because links can be non-FIFO.
2. They don’t have a fixed ring; they have to construct a virtual ring, and that will change dynamically, based on changing topology and also join/leave requests.
3. Partitions—they don’t want a process that gets disconnected from $p_0$ to prevent others from entering Crit.
1.4 The algorithm
1.4.1 Some motivation
To achieve self-stabilization, the distinguished process $p_0$ keeps generating mutual exclusion tokens (m-tokens), periodically.
In case one has been lost.
That’s dangerous for achieving mutual exclusion: In token-based mutex algorithms, there’s generally just one token, and only the token-holder can enter the critical section.
So they need to do something else here:
Tokens are issued with id numbers, and processes perform a local consistency check of their own state against the id number in the token.
In the “normal case”, when the system is stabilized, the check will ensure that only one process will enter Crit at once.
Of course, while the system is unstable, anything can happen— more than one process can enter Crit.
The check they do will be something like D and V’s.
1.4.2 Some details
Two kinds of tokens: Join-request tokens (j-tokens), and Mutual exclusion tokens (m-tokens)
Both kinds are routed using LRV, which is claimed to be similar to LR, but self-stabilizing.
LRV doesn’t sound all that similar to LR:
It uses bounded timestamps, and enforces a lifetime on tokens, by discarding any token that has been forwarded more than a certain number of hops.
How do they set the number of hops?
It’s based on the time that it takes a token to make one pass through the active processes in the network, giving everyone a chance to enter the critical region.
So it sounds like the lifetime of a token is really just one pass through the active processes in the network—it’s not circulated repeatedly as in the original LR paper.
Tokens get ids, from bounded range, incremented modulo the bound.
Execution of algorithm is divided into phases.
In one phase, the m-token is supposed to pass through all the active members exactly once.
$p_0$ maintains information about the membership, in two variables:
ring: the actual members, and
new-set: the processes that are currently trying to join (from which $p_0$ has recently received j-token messages).
Membership gets updated only at the beginning of a phase.
In each phase, $p_0$ repeatedly generates m-tokens carrying the same ring and new-set info. Processes in new-set initialize their local states upon receipt of an m-token (they will receive it, since the routing is controlled by the underlying LRV protocol, which should visit everyone). They say that the processes in ring are somehow visited in the order specified in ring.
(But I don’t understand this—I thought that LRV was being used to determine the order of token visits, not the order in ring. It sounds like they use LRV, but they allocate the critical section in the order given by ring.)
Anyway, when a process is visited, it checks its local state against the token to see if it really has access to Crit.
The rule it uses here is given in lines 9.13-9.15 of the code, and discussed in the middle of p. 16.
The rule is supposed to be:
For $p_0$, the token’s id should be the same as $p_0$’s current id; for others, the token’s id should be one more than the process’ current id.
Moreover, the process has to be the first one on ring.
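As a sketch of that acceptance rule (taking $p_0$ to be pid 0, and assuming ids wrap modulo some bound `B`, which the notes imply for bounded state but never pin down):

```python
B = 16  # token ids wrap modulo some bound B (value is an assumption)

def may_enter_crit(pid, my_id, token_id, ring):
    """The notes' rule: the process must be at the head of `ring`;
    p0 (pid 0) wants token_id equal to its own id, while everyone
    else wants token_id to be one more than theirs."""
    if not ring or ring[0] != pid:
        return False
    if pid == 0:
        return token_id == my_id
    return token_id == (my_id + 1) % B
```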
When a process wants to join, it sends j-tokens (repeatedly) to $p_0$ (somehow).
When a process wants to leave, it just says so by piggybacking the info on every m-token that visits it.
$p_0$ starts a new phase:
When it gets the token back indicating that all the members in ring have gotten access to Crit on the previous phase.
Or, when a (nice, long) timeout expires (long enough for everyone to have gotten access if things are behaving normally—note that this requires a bound on the time in the critical section).
At that point, $p_0$ updates ring to include the new processes in new set that have already initialized their states, and to exclude those who requested to leave.
$p_0$ gets to pick the ordering of the new processes that it adds to ring.
(It does this based on the order the m-token happened to traverse; that ordering was determined by the LRV algorithm. But of course there is no guarantee that the next time, the LRV algorithm will send the token along the same path! This is confusing...)
new-set is now updated to be all the new nodes from which $p_0$ has received j-tokens during the previous phase.
Those are the main ideas. The rest seems like details, confusing.
1.5 Conclusions
A key piece, self-stabilizing leader election, isn’t here. How to do this?
2 Self-stabilizing group communication using random walks
From Dolev, Schiller, Welch: Random walk for self-stabilizing group communication in ad hoc networks.
2.1 Overview
Puts together many ideas from different places.
They assume a changing undirected graph, like (most of) the previous papers in this set.
They assume nodes learn about their neighbors (atomically at both ends?).
They use the graph to perform a random walk, sending an “agent” around, randomly choosing each successive step from among the set of neighbors.
The agent is regarded as an active entity, but that really doesn’t seem to matter for these applications—here, it acts just like a message—a repository for some data.
Anyway, they identify a subclass of “nice” executions: those in which there is just a single agent, and it happens to arrive at every processor in the system within at most $M$ moves (for some value $M$ that is fixed, as a known function of the network size).
They use this “nice” abstraction to split the problem into two pieces:
1. Eventually achieving a nice execution, starting from an arbitrary configuration.
2. Using the nice execution to solve some interesting problems.
The interesting problems they choose are:
Group membership maintenance.
Group communication (group multicast).
Resource allocation (mutual exclusion).
They attempt to achieve nice executions using random walks.
However, this doesn’t always work—some patterns of change in network topology can prevent any strategy from yielding a nice execution.
But they identify a few cases where it does work, and give some bounds on $M$ that work well in those cases.
To achieve nice executions, they basically have to create new agents if none exist, and throw out duplicate agents if more than one exist.
Creating new agents:
Use a timeout. A process that doesn’t see an agent for a long time creates one (but many might do this—but then the duplicate removal should take care of these).
Removing duplicates: Whenever agents collide, remove them all and start up a new one in their place.
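These two repair rules can be sketched in a few lines. The agent representation, the view-id counter, and the function names here are my own framing for illustration; the paper's ids would be bounded.

```python
_counter = 0

def new_view_id():
    """Fresh view ids; a real system would use bounded ids (assumption)."""
    global _counter
    _counter += 1
    return _counter

def repair(pid, held_agents, ticks_since_last_agent, timeout):
    """Apply the two stabilization rules; return the agents pid should hold."""
    if len(held_agents) >= 2:
        # Collision: kill all colliding agents, start one fresh singleton view.
        return [{"creator": pid, "view": (new_view_id(), {pid})}]
    if not held_agents and ticks_since_last_agent > timeout:
        # Silence: no agent seen for too long, so create one.
        return [{"creator": pid, "view": (new_view_id(), {pid})}]
    return held_agents
```

Note that several silent processes may all time out and create agents at once; the collision rule then trims them back down to one.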
Assuming nice executions, they can implement group membership:
Have everyone set a flag saying whether they want to be in the group.
The agent wanders around collecting information from these flags, forming a new group (new viewid, known members) each time it sees a change in the membership requests.
The agent keeps track of the time it last saw a membership request from each process, and times the process out, throwing it out of the group, if it doesn’t see another one for a long while.
Also, it removes a member if it revisits the member and sees the flag unset.
Using token circulation and group membership, they then implement group communication:
By letting group members attach messages to the agent, in order.
Every member of the group that the agent visits can then read the latest messages.
After a while, the agent can delete the messages (after they get old enough).
Finally, they use token circulation and group membership to implement mutual exclusion:
Let the agents carry around a queue of requests from the members for the critical region.
The resource gets granted to the first node on the queue.
When the node finishes with the resource, it must wait for the agent again in order to remove itself from the queue (?)
Those are the key ideas. Now, for a few details:
2.2 Introduction
The algorithm is “self-stabilizing”, which means that it can be started in any configuration, and eventually will start working right.
Here, that means that eventually (with high probability, anyway) it will start acting like a nice execution, which then guarantees that it gives the correctness properties needed for the other problems.
Group communication and group membership:
Well-studied abstractions for programming changing networks.
Originally designed for networks that didn’t change very frequently.
However, mobile networks are subject to more frequent changes.
(But, which processes want to belong to the group might not change so quickly—and that may be a key determinant of the performance of this algorithm.)
Group membership:
Nodes decide to join/leave a named group.
The group changes, forming a succession of “views”.
Each view = (viewid, members)
Group multicast:
Each member can send a message to the group.
It should be delivered to all members of the group (often the requirement says “while they are in the same view”, but that extra requirement doesn’t seem to be used here).
The order of delivery should be the same everywhere, for those messages that are actually delivered.
Designed for a fairly rapidly changing mobile network.
Flooding isn’t good (too many messages).
TORA-like structures aren’t too good either—the system may change too quickly to allow effective maintenance of the structures.
They also compare with “compulsory algorithms”, which we will study in class 21.
This algorithm doesn’t require any flooding, doesn’t build any structures, and doesn’t require any compulsory motion.
2.3 The system settings
\( n \) processors, \( n \leq N \); the upper bound \( N \) is known; processors have unique ids.
Agents are something like processes, but are sent from process to process like messages. When an agent is at a process, the process can execute some of its steps, then send it on to a neighbor.
Reasonable definition of execution. Can start in an arbitrary state (since they are considering self-stabilization).
Nice execution: A single agent; visits every processor in at most every \( M \) consecutive moves.
2.4 Random walks of agents
Choose the next node randomly, uniformly, from among the neighbors.
Ensure a single agent as described above:
Using timeouts to start up new agents (if you haven’t seen any for a while).
Detecting collisions and discarding all but one.
Impossibility result:
Assume that we do have only one agent.
Even so, if the topology changes are bad enough, we can’t guarantee that the agent will visit everyone, using the assumed random strategy or any other.
The impossibility result is based on a nice example: node 2 moves back and forth between nodes 1 and 3, always keeping the graph connected.
The agent visits node 1 when node 2 isn’t there, and similarly for node 3. So the agent never visits node 2.
So they have to rely on some special properties of the topology and topology changes. They list three that seem to work, but this looks fairly sketchy and preliminary:
1. Fixed communication: That is, a random walk of a fixed \( n \)-node graph. Then known results say that it takes time \( \mathcal{O}(n^3) \) to reduce to a single leader and likewise \( \mathcal{O}(n^3) \) for a single leader to visit the entire graph.
2. Randomly changing graph: Always connected, but changes completely in between two successive moves of the agent. They say it’s essentially a random walk on a complete graph. Then they get \( \mathcal{O}(n \log n) \) for both times above.
3. Neighborhood probability. ??? This is unclear.
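For the fixed-graph case, the random walk is easy to simulate. The following small experiment (illustrative only; the graph choice and move budget are mine) counts the moves until a uniform random walk has visited every node of a 6-node cycle, whose cover time is known to be \( \Theta(n^2) \):

```python
import random

def cover_moves(adj, start, rng, max_moves=10**6):
    """Walk from `start`, picking a uniformly random neighbor each move;
    return the number of moves until all nodes have been visited."""
    visited = {start}
    node = start
    for move in range(1, max_moves + 1):
        node = rng.choice(adj[node])
        visited.add(node)
        if len(visited) == len(adj):
            return move
    return None  # did not cover within the budget

# Cycle of 6 nodes, as an adjacency-list dict.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```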
2.5 Membership service by random walks
Group membership is described for nice executions, with two requirements:
4.1. For every $i$, if $g_i$ (the flag set by process $i$ to indicate that it wants to be a member) has a fixed value (true or false) throughout the execution, then eventually $i$ is a member of the current view recorded in the unique agent iff $g_i = \text{true}$.
That is, if $i$ consistently says it wants to be a member, then it is in all views from some point on, and if it consistently says it does not want to be a member, then it is in no views from some point on.
Notice that the view is allowed to change, though.
4.2. If every $g_i$ has a fixed value throughout, then eventually the view becomes stable (stays fixed at some (viewid, members) pair).
Now we describe the group membership algorithm.
We have to say what it does when started in an arbitrary state.
The algorithm will ensure that the execution eventually becomes nice (has a nice suffix); however, it doesn’t have to start out nice—so we have to say what it does even when it isn’t nice.
The agent carries a view around: (viewid, membership).
Also, for each member, the agent has a counter value.
Whenever the agent visits $p_i$, and $p_i$ wants to be a member, its counter gets set to a ttl (time to live) constant.
This counter is then decremented whenever the agent is received by any processor.
$p_i$ remains an active member of the group as long as its counter is strictly greater than 0.
Now, before the execution becomes nice, there can be zero agents, or more than one.
If any processor $p_i$ doesn’t see any agent for too long, it just creates one, initializing it with a new view with members = $\{i\}$.
If two or more agents, each with its own view, arrive at a processor $p_i$, it kills them all, and starts up a new view, again with members = $\{i\}$.
Whenever a processor $p_i$ that holds a single agent discovers that the set of members has changed, it forms a new view with a new viewid and the new membership.
The change can result from the current process changing its request flag $g_i$, or some process timing out (count reaches 0).
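The counter bookkeeping in the last few lines can be sketched as follows. The agent-tuple layout, the TTL value, and the order of refresh/decrement/expel are my own assumptions about details the notes leave open:

```python
TTL = 8  # time-to-live constant (value is an assumption)

def agent_visit(agent, pid, wants_membership):
    """One agent visit at processor pid; agent = (view_id, members, counters)."""
    view_id, members, counters = agent
    if wants_membership:
        counters[pid] = TTL           # refresh on every visit
    elif pid in counters:
        del counters[pid]             # flag unset: remove immediately
    # Every agent move decrements all counters; members at 0 time out.
    counters.update({p: c - 1 for p, c in counters.items()})
    alive = frozenset(p for p, c in counters.items() if c > 0)
    for p in set(counters) - alive:
        del counters[p]
    if alive != members:
        view_id += 1                  # membership changed: form a new view
        members = alive
    return (view_id, members, counters)
```

In a nice execution the agent revisits every willing member well within TTL moves, so their counters keep being refreshed and the view stabilizes, matching Properties 4.1 and 4.2.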
Lemma 4.3. says that, if the execution is in fact nice, then Properties 4.1 and 4.2 of group membership hold.
This doesn’t seem hard to believe, informally: a traversal collects all the correct information, and forms new views containing anyone whose $g_i = \text{true}$. If all the $g_i$’s are fixed, then the view never changes again (after an initial traversal that collects all the correct information). LTTR.
The protocol also ensures that eventually the execution becomes nice.
2.6 Group multicast
They describe two common token-based approaches to group multicast:
1. Token circulates, carrying a sequence number for messages, which processes assign to particular messages and then increment.
2. Messages themselves are put into the token; the order determines the order of delivery that everyone sees.
Here they use the second approach: Maintain a queue of messages in the agent state.
Group communication algorithms in the literature make various different communication guarantees.
Here they require (in nice executions):
5.1. If $p_i$ is a member of every view in the execution, then any message sent by a member of the group (any view) during the execution is eventually delivered to $p_i$.
5.2. Same order everywhere, for the messages that are delivered.
To achieve these goals, they let the agent accumulate an ordered queue of all the messages that any visited process wants to multicast.
They keep the message queue length bounded, by throwing out old messages after long enough has passed that they should have been delivered. (? I’m not sure this is quite what they are describing—what about the case where one view persists forever? Its messages should be cleaned up after a while, but they don’t seem to say that.)
2.7 Resource allocation
Mutual exclusion, really.
They assume that anyone who gets the resource releases it within some known time.
They require mutual exclusion and no-lockout.
They show how to build this using the basic agent traversal and the group membership service (not the multicast service).
The basic idea seems to be that everyone who wants the resource joins a group $g_{resource}$.
Then the agent orders the members in the order in which they join the group.
And the agent allocates the resource to the process that is at the head of the request queue.
The process presumably learns about this when the agent arrives with itself at the front of the request queue.
When the process is done with the resource, it simply leaves $g_{resource}$.
Also in some other situations, the resource gets released, e.g., when the process who has it leaves the system (?), or certain types of partitions occur (see the notes about “primary components”).
3 Virtual Objects
[[[Seth to provide something here.]]]
4 Compulsory protocols overview
We have four papers on a common theme—compulsory protocols. The first paper briefly introduced the idea, along with two non-compulsory algorithms. The other three, by the same three authors, develop the idea in several directions. The contents of the papers overlap quite a bit.
A compulsory protocol, for an ad hoc network, is one in which some or all of the mobile nodes move as directed, under algorithmic control, rather than just going where they would like.
They focus on compulsory protocols for simple message routing.
4.1 The network model
They consider $n$ mobile nodes moving in 3D space, with uids.
They consider the space partitioned into $n$ cells that they call “cubes”. Let’s call them “cells”. Cells are chosen to be small enough so that, if a mobile host transmits while it is within a cell, its message is guaranteed to be received by any other hosts in the same cell. Transmission delays are regarded as negligible.
They construct a network graph $G = (V, E)$, where the vertices $V$ are the cells, so $|V| = n$. And $E$ gives adjacency relationships between cells.
In this way, they abstract away from the particular geometry.
They assume that the cells are arranged in a regular pattern in 3-space, such as a grid. Moreover, the degree of the graph is bounded by a constant; for a typical 3D grid, the number of neighbors would be 6, one per face.
That implies that $|E| = \epsilon$ is linear in $n$.
They intend that $n$ should approximate the ratio between the volume of the total space and the volume covered by the transmission radius of a single mobile node. $n$ is a good measure of the size of the space.
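A quick sanity check of the model, using a 2D grid for brevity (the 3D case is the same with up to 6 face-neighbors); the construction below is mine, not from the papers:

```python
# Sketch: the network graph for a d-dimensional grid of cells (here d = 2).
# Vertices are cells, edges join adjacent cells, so degree <= 2*d and the
# number of edges is linear in n, as the notes claim.

from itertools import product

def grid_graph(side, d=2):
    """Cells are integer tuples in [0, side)^d; edges join cells differing
    by 1 in exactly one coordinate."""
    cells = list(product(range(side), repeat=d))
    edges = set()
    for c in cells:
        for axis in range(d):
            nb = list(c)
            nb[axis] += 1
            nb = tuple(nb)
            if nb[axis] < side:
                edges.add((c, nb))
    return cells, edges

cells, edges = grid_graph(4)
n, eps = len(cells), len(edges)
print(n, eps)   # 16 cells, 24 edges: eps <= d * n, i.e. linear in n
```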
4.2 Compulsory protocols
They classify MANET protocols as:
- Non-compulsory: The mobile nodes travel anywhere they like.
- Compulsory: The mobile nodes travel where the algorithm says they should.
- Semi-compulsory: A subset of the mobile nodes, called the “support nodes”, or just the “support”, travel according to algorithm control; the non-support nodes go where they want.
Having some nodes under algorithm control can help achieve needed connectivity, help with tasks like message delivery.
Related work: Li and Rus: Algorithm for message delivery. Forces all the mobile hosts to deviate slightly from their planned trajectories.
So, in CNS terms, their protocol is compulsory, but it limits the moving directions that the algorithm gives to the mobile nodes.
Their algorithm works for “deterministic host routes” (meaning?)
They also show an optimality result for message transmission times.
Another influence: The “two-tier principle” articulated by Imielinski and Korth in their early book on mobile computing.
It says take advantage of (move computation and communication to) fixed parts of network whenever possible.
They regard the support as analogous to a fixed part of the network.
For this group, the idea of compulsory protocols seems to have arisen in the first paper, by Hatzis et al., as an afterthought to a non-compulsory algorithm. So we’ll cover this result next.
5 A non-compulsory protocol for leader election and node counting
From Hatzis paper.
The problem is to elect a unique leader among the $m$ mobile nodes.
It should learn it is the leader, and others should learn they are not the leader.
Moreover, the leader should learn the total number $n$ of mobile nodes.
In addition to the model assumptions listed earlier, they also assume the nodes know their geographical location, and hence, can tell which cell they are in.
5.1 The protocol
Very simple.
Every mobile node keeps a local counter, initially 1.
Whenever two nodes meet, they engage in a pairwise protocol, in which the one with the higher id wins and the other loses.
The winner remains active, whereas the loser becomes inactive.
The winner absorbs the loser’s count by adding it to its own count.
They also augment the protocol to keep track of lists of ids of nodes that have been collected, instead of just the count. Of course, that leads to larger messages, hence more communication.
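The protocol is easy to simulate (illustrative sketch; the encounter schedule here is just random pairs, standing in for nodes meeting as they move):

```python
# Sketch of the pairwise-absorption protocol (illustrative simulation, not the
# paper's code): when two active nodes meet, the higher id wins, stays active,
# and adds the loser's count (and collected id list) to its own.

import random

class Node:
    def __init__(self, uid):
        self.uid = uid
        self.active = True
        self.count = 1              # local counter, initially 1
        self.ids = {uid}            # augmented version: collected ids

def meet(a, b):
    if not (a.active and b.active):
        return
    winner, loser = (a, b) if a.uid > b.uid else (b, a)
    loser.active = False            # loser becomes inactive
    winner.count += loser.count     # winner absorbs the loser's count
    winner.ids |= loser.ids

random.seed(0)
nodes = [Node(u) for u in range(5)]
# Simulate random pairwise encounters until one active node remains.
while sum(nd.active for nd in nodes) > 1:
    a, b = random.sample(nodes, 2)
    meet(a, b)

leader = next(nd for nd in nodes if nd.active)
print(leader.uid, leader.count)    # highest uid wins; count = total node count
```

The invariant is that the total count over active nodes is conserved, so the last active node knows $n$.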
For some (not completely convincing) reason, they restrict the nodes so they participate in such an encounter only when the nodes “enter a new cell”.
The idea here is to conserve battery power, but I’m not exactly sure how that would work: Should the nodes stay in sleep mode except occasionally, when they first enter a cell?
How would two nodes know when to wake up in synchrony, then?
Protocol correctness seems obvious, as long as messages can’t get lost, nodes can’t fail, etc.
However, the protocol has no resilience at all.
5.2 Protocol analysis using the random walk assumption
Now they want to analyze the expected time for the algorithm to complete. Of course, it’s impossible to bound this, because it depends entirely on how the nodes are moving. So, you see, they would like to be able to tell the nodes how to move...so they could obtain some guarantees.
But here, for their analysis, they simply assume that each mobile node performs an independent random walk on the network graph $G$. They claim that this is not such an unreasonable assumption, given that lots of other papers (esp. systems papers) assume the random waypoint model (which is a kind of random walk).
The analysis looks quite straightforward, based on Markov chain analysis techniques. First, they observe that they can restrict attention to the case where $m$, the number of mobile nodes, is exactly equal to $n$, the number of cells. This is OK because:
1. If $m < n$, they could augment the algorithm with dummy nodes, thereby bringing the number of nodes up to exactly $n$. And, they could assign all these nodes ids strictly less than all those of the real nodes. Then the dummy nodes would not affect the behavior of the real nodes in the protocol. So, the time for the real $m$ nodes to complete is no greater than the time for all $n$ real + dummy nodes to complete.
2. If $m > n$, then some of the $m$’s start out in the same cell. They will meet immediately and become reduced to one node, thus reducing the cost to that for the cases where $m \leq n$. So, restrict attention to $m = n$.
Lemma 3.2. calculates a bound on the probability that $M_{i,j}$, the meeting time of two mobile nodes that start in cells $i$ and $j$, respectively, is bigger than some given time value (real number?) $t$.
The analysis says that this probability decreases exponentially in $t$; specifically, they get a bound of $e^{-t/(e\,m^*)}$.
Here, $e$ is the usual constant. And $m^*$ is the maximum expected value of $M_{i,j}$, where the maximum is taken over all pairs $i, j$.
So all they seem to be doing here is relating the value $M_{i,j}$ for any particular $i, j$ to the expected value of $M_{i,j}$ for all $i, j$. Well, OK...
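A standard way to get a bound of this shape from Markov's inequality (my reconstruction, not necessarily their proof): since $E[M_{i,j}] \le m^*$ from every pair of starting cells, Markov gives

```latex
\[
  \Pr[M_{i,j} > e\,m^*] \;\le\; \frac{E[M_{i,j}]}{e\,m^*} \;\le\; \frac{1}{e},
\]
```

and restarting the argument after each block of $e\,m^*$ steps (worst-case over the walkers' positions at each block boundary) multiplies the failure probabilities:

```latex
\[
  \Pr[M_{i,j} > t] \;\le\; (1/e)^{\lfloor t/(e\,m^*) \rfloor} \;\approx\; e^{-t/(e\,m^*)}.
\]
```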
Theorem 1 does a bit more: It bounds the expected time to finish, in terms of the maximum, over all $i, j$, of the expected time $E_i(T_j)$ for a host that starts in cell $i$ to reach cell $j$. Specifically, they bound the expected time to finish by $O(\log(n)\max_{i,j} E_i(T_j))$.
They use a series of equations, LTTR. Then in Corollary 3.1, which they give without any proof, they bound this further in terms of the number of edges in the graph. Specifically, Corollary 3.1 says that the expected time to finish is $O(\epsilon \log n)$, that is, the log of the number of cells times the number of edges.
The key step here seems to be the claim that for every $i$ and $j$, $E_i(T_j)$ is bounded linearly in terms of the number of edges.
Sounds plausible—maybe standard for random walks—but no proof here.
They also tighten their bound, in Theorem 2.
Apparently Theorem 1 allowed for the winner to meet all the other hosts one at a time.
But in most executions, the time it takes to meet the other hosts would overlap.
Taking this parallelism into account, they show that the expected time to finish is actually just $O(\epsilon)$, or $O(n)$ (linear).
Finally, they add termination detection, based on timeouts.
This implies that the nodes have clocks, and that they know when the protocol is likely to have completed.
5.3 Bells and whistles
Now they think of the idea of compulsory protocols.
Instead of assuming the nodes walk randomly, they might assume that the algorithm is allowed to tell the nodes where to go.
And of course, what it would tell them to do is to walk randomly.
Then of course the previous analysis results apply.
Anonymous networks:
If nodes have no ids, they can choose them randomly.
With high likelihood, they will choose different ones, and then the protocol works as before.
If two choose the same, then when they meet, they can “refine” their choices to choose different ones.
(E.g., they could add lower-order random choices, rather than choosing entirely new ids, and use lexicographic ordering on the pairs.)
This seems like a good idea, because otherwise they could ruin the property that the largest id wins (which could be interesting, though it’s not part of the problem requirements).
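The refinement idea can be sketched as follows (hypothetical helper; the paper gives no code):

```python
# Sketch of the id-refinement idea for anonymous networks: ids are tuples of
# random choices; on a tie, both nodes append fresh lower-order random choices
# and compare lexicographically, so the original winner keeps winning.

import random
random.seed(1)

def refine_until_distinct(id_a, id_b, rng=random):
    a, b = list(id_a), list(id_b)
    while a == b:
        a.append(rng.randrange(2 ** 16))   # lower-order random choice
        b.append(rng.randrange(2 ** 16))
    return tuple(a), tuple(b)

a, b = refine_until_distinct((7,), (7,))   # two nodes picked the same id
# Lexicographic order on tuples is a total order, so "largest id wins"
# still makes sense after refinement, and the high-order choice is preserved.
print(max(a, b))
```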
Simulation results: Nothing interesting—they just back up the theoretical results.
Further work: They suggest using similar strategies (by which they must mean random walks, and might mean compulsory protocols) for other problems like routing, coordination, termination detection, failure detection.
6 Semi-compulsory protocols for message routing
Working from the other three papers. They present a generic semi-compulsory protocol idea, then specialize it in two ways: to a Snake protocol and to independent Runners.
6.1 Some motivation
Problem definition: Send a message from some sender mobile node $S$ to a receiver mobile node $R$.
In some places, they also mention that they want to notify $S$ that its message has been delivered;
but then they seem to forget about this, so we will too.
They make a breezy claim that “No distributed algorithm can be implemented in ad-hoc mobile networks without solving this basic communication problem.”
I’m not sure what they mean here; it suggests some impossibility claim...but it can’t be right, in general...consider sensor nets in which the individual nodes have no importance—just the data they are collecting about the real world environment.
They discuss previous solutions to message routing in MANETs (DSR, AODV, TORA, LAR,...).
Some problems with these:
— Some require flooding.
— They require constructing and maintaining data structures, which might not work well if the network is changing rapidly.
— They are expensive in terms of communication.
They conjecture a kind of “impossibility result”:
Any algorithm that tries to maintain a global structure with respect to the temporary network will be erroneous if the mobility rate is faster than the rate of updates of the algorithm.
Well, at a certain level, this sounds right; can we turn this into an actual impossibility result?
So they are looking for a better approach:
— Should work very well in a rapidly changing network.
— Require little overhead.
— Use local information only.
— Deliver messages fast.
For all of this, they are willing to require certain designated nodes to move under algorithm control.
6.2 The generic protocol
Support = the nodes whose motion is controlled.
They assume some number \( k \) of support nodes.
In general, the support nodes move somehow through the network graph.
CNS abstract this behavior into a “support motion subprotocol” P1.
When a sender \( S \) is near a support node, it gives its message to the support node (using another protocol P2).
The message is stored “somewhere within the support structure”.
When a receiver \( R \) is near a support node, it gets notified about a message waiting for it, and gets the message.
How are the messages managed within the support (that is, among the support nodes)?
They are propagated among the support nodes when two or more support nodes come within communication range.
How the support nodes exchange and maintain this information is controlled by a “synchronization subprotocol” P3.
So, in general, the support is some kind of moving skeleton subnetwork. They leave things flexible, allowing different protocols for P1, P2, P3.
6.3 The Snake protocol
6.3.1 Protocol description
The protocol involves a collection of \( k \) nodes, for some (parameter) \( k \), with an established ordering. The first node, called the “head”, does a random walk, at each step visiting a neighboring cell chosen uniformly at random. The remaining nodes follow along behind the head, in order. They all move at the same speed (in discrete steps). Thus, after each step, each node occupies the cell that its predecessor did after the last step. (They don’t need common sense of orientation—they just need to be able to choose a random direction, or move to someone else’s cell.)
This is used to send and deliver messages, as described above for the generic protocol:
\( S \) waits until any node of the snake is within range, then transfers its message to the snake. The snake transfers the message among its own nodes, according to some protocol, e.g., full replication.
Since the snake is always connected, these nodes can manage data however they like. Then, when any node of the snake is within range of the message’s target $R$, it tells $R$ there is a message for it, and actually delivers it.
They also describe how to set up the snake initially. They assume that the $k$ support nodes know who they are. Somehow they have to coordinate to elect one to be a leader, who becomes the head of the snake. It takes charge of ordering the other $k-1$ support nodes into a list, and telling them their positions.
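The motion rule can be sketched as follows (the 2D grid and all names are my simplification):

```python
# Sketch of the Snake motion (illustrative): the head does a random walk on the
# cell graph; each body node steps into the cell its predecessor just vacated,
# so the snake stays connected and everyone moves at the same speed.

import random
random.seed(2)

def neighbors(cell, side):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < side and 0 <= y + dy < side:
            yield (x + dx, y + dy)

def snake_step(snake, side):
    """snake[0] is the head; return the snake after one synchronous step."""
    new_head = random.choice(list(neighbors(snake[0], side)))
    # Body follows: each node takes the cell its predecessor just left.
    return [new_head] + snake[:-1]

snake = [(0, 0), (0, 1), (0, 2)]     # k = 3 support nodes on a 5x5 cell grid
for _ in range(10):
    snake = snake_step(snake, side=5)
print(snake)
```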
6.3.2 Evaluation
They claim that this algorithm ensures coverage of the whole network (with high probability), within bounded time; they analyze this time, based on random walk analysis results. The analysis does not assume that the non-support nodes are also doing random walks. Rather, they can move any way they like, as long as they aren’t behaving “adversarially”, trying to avoid the support nodes; more precisely, they require that the non-support nodes’ motion is independent of that of the snake. Then just the fact that the snake is doing a random walk is enough to get their guarantees.
We’ll come back to this analysis in a minute, after introducing the other special case: the Runners protocol.
Advantages of the Snake algorithm:
They claim it achieves very fast communication between any two mobile users. With low communication overhead, no elaborate state, simple local processing. The nodes don’t actually use any location information.
Disadvantage:
It requires compulsory motion of the support nodes.
6.3.3 Bells and whistles
Significant modification:
The head does a random walk on just a spanning subgraph of the network graph.
Since the network graph is fixed, it’s possible to define such a subgraph once and for all—it doesn’t
have to be maintained in the presence of changes.
This seems to improve performance, as determined by simulations and also by analysis.
Robustness: They also give a robustness “theorem” for the Snake protocol, but it requires modifications to the algorithm.
Theorem: A revised version of Snake tolerates failure of up to one support host.
The revised version allows the snake to split into two, then when the head H2 of one snake, snake2,
happens to encounter any node of the other, snake1, H2 splices its entire snake2 inside snake1 at
the point of the encounter.
They note that this trick doesn’t work for more than one failure—since a cycle could form.
6.4 The Runners protocol
Simply allow $k$ support nodes to perform $k$ independent random walks, synchronizing whenever
they meet.
Apparently this performs well—some of their experiments say it performs better than the Snake.
Apparently increasing $k$ has a strong impact on the expected delivery time; that seems to make sense, since there are more chances for $S$ (or $R$) to encounter a support node if they are moving independently than if they are moving together...
In contrast, increasing $k$ beyond a certain point—around $\sqrt{n}$—doesn’t seem to help much in
the Snake protocol.
They discuss a 2-phase commit protocol, which is used for runners that encounter each other
to exchange their information.
This suggests that they think some kind of consistency will be needed for this synchronization—so
they are using a fairly heavyweight consensus (commit) protocol here.
But it is not clear what consistency guarantees they require and why.
(The commit-style subprotocol is standard: one of the runners that has met (e.g., the one with the
lowest id) takes charge of collecting everyone’s information and then broadcasting it to everyone.)
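A minimal sketch of the collect-and-broadcast exchange (my names; the actual subprotocol also handles the commit handshake, which is elided here):

```python
# Sketch of the Runners synchronization (illustrative): when runners share a
# cell, the lowest-id one takes charge, collects everyone's pending messages,
# and broadcasts the merged set back, in the style of the commit subprotocol.

def synchronize(runners_in_cell):
    """runners_in_cell: list of (uid, set_of_messages). Returns merged state."""
    coordinator = min(uid for uid, _ in runners_in_cell)   # lowest id takes charge
    merged = set()
    for _, msgs in runners_in_cell:                        # collect phase
        merged |= msgs
    # Broadcast phase: every runner now holds the same merged set.
    return [(uid, set(merged)) for uid, _ in runners_in_cell], coordinator

state, coord = synchronize([(3, {"m1"}), (1, {"m2"}), (2, {"m1", "m3"})])
print(coord, sorted(state[0][1]))
```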
Robustness:
Theorem: Runners tolerates failure of up to $k - 1$ support hosts.
Because the remaining nodes just continue their traversals as before.
Experiments: They compare Snake and Runners.
Only a small support is required for each.
However, Runners did better than Snake in almost all cases—so maybe it’s just a better algorithm.
6.5 Analysis of the Snake protocol
The best source here is CNS’s DISC paper—it’s on just the Snake protocol.
The major point of the paper is that the non-support nodes can move arbitrarily, “provided that they don’t deliberately try to avoid the support”, and their analysis results still hold.
What exactly does this mean? That they move according to a pre-defined, oblivious deterministic or randomized strategy?
The authors don’t actually say whether the non-support nodes’ movement strategy is allowed to adapt to what they encounter during execution—so I will presume that it can’t.
Thus, I will assume that each non-support node has a predefined walk strategy, that doesn’t adapt at all to what it sees during execution.
To anything, not just to the support’s movement—since I don’t know how to formalize dependence on the support’s movement.
Their analysis results consist of guaranteed expected time bounds for communication from $S$ to $R$—guaranteed for arbitrary motion patterns for the non-support nodes.
These time bounds don’t depend on the number of non-support nodes, nor on the initial placements, nor on the movement strategy for $S$ and $R$—just on the size of the network graph.
Their proofs are rather heavy on the Markov analysis.
In particular, they use a fundamental notion of “strong stationary times” of reversible Markov chains.
They restrict attention to properties of the random walk performed by the head, without worrying about the rest of the support hosts.
(They define:
$p(i, j)$, transition probabilities for the head (probability that, from $i$, it moves directly to $j$)
$P_i(E)$, the probability that the walk satisfies a property $E$, given that it starts at vertex $i$.
$T_j$, first “hitting time” when the walk reaches vertex $j$.
$E_i(T_j)$, expected value of $T_j$, for walks that begin at cell (vertex) $i$.)
In analyzing random walks on graphs, it’s convenient to define $\pi$, the stationary distribution.
Markov theory says that, after a “sufficiently long” time $t$, a walk starting anywhere will reach the various vertices with probabilities that are given by the stationary probabilities (actually, these probabilities approach the stationary probabilities in the limit).
Moreover, the stationary distribution has a nice formula: $\pi(i)$ for any vertex $i$ is just $\text{deg}(i)/2\epsilon$ (recall $\epsilon$ is the number of edges).
(Check that these sum to 1, as they should for a probability measure.)
(This says that the vertices have probabilities of being visited that are strictly proportional to their degrees. More neighbors, more chances of being visited.)
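The check asked for above is the handshake lemma:

```latex
\[
  \sum_i \pi(i) \;=\; \sum_i \frac{\deg(i)}{2\epsilon}
  \;=\; \frac{1}{2\epsilon}\sum_i \deg(i)
  \;=\; \frac{2\epsilon}{2\epsilon} \;=\; 1,
\]
```

since each edge contributes 1 to the degree of exactly two vertices, so $\sum_i \deg(i) = 2\epsilon$.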
$p_{i,j}(t)$, the probability that a walk started at $i$ will be at $j$ at time $t$.
Now they state their main theorem, Theorem 1:
The statement is a bit confusing.
It seems to be saying, first, that the algorithm “guarantees” communication from $S$ to $R$ in finite
time.
Presumably, they must mean that with probability 1, it eventually succeeds.
But this is a consequence of what I understand to be the second statement, which gives a bound
on the expected time for this communication.
Actually, they don’t give the bound in the theorem statement—they just say that some bound
exists, and that it’s a “function of the motion space size” (equivalently, of the number $n$ of cells).
Proof: I found it confusing—didn’t completely follow.
They analyze the time it would take for a randomly-walking snake head to meet the node $S$.
Then the time to meet $R$ is symmetric.
And they can also add in some time for communicating the message among the snake members.
To analyze the time for a randomly-walking head to meet $S$, then define $EM$, the expected time
of the first meeting.
What is this expectation taken over? It must be over the random choices in the random walk, and
also, any random choices made by $S$ in its walk strategy (this is allowed).
But it is not randomizing over the starting positions for the head and $S$, nor over the choice of
walk strategies $S$ might be using; they want to allow all possibilities here.
So, they define $m^* = \sup(EM)$, taking the worst case expected meeting time over all starting
positions for the head and $S$, and all walk strategies for $S$.
So, fix the starting positions and the $S$ strategy; try to bound $EM$ for this combination.
They again use the stationary distribution $\pi$.
I don’t completely follow this analysis.
So I’ll just try to give a high-level idea of what I think I understand.
Note that the distribution of the head’s position approaches the stationary distribution in the
limit.
They consider approximations to this. Namely, they claim there exists a (sufficiently large) time
$u$ such that, for all vertices $i, j$, the probability that the head’s walk, starting from vertex $i$, is at
$j$ at time $u$ is at least $(1 - 1/e)\pi(j)$, that is, it is within a $1/e$ fraction of the correct stationary probability $\pi(j)$.
They consider the sequence of times $u, 2u, 3u, \ldots$.
They invoke the independence assumption, top of p. 11, to conclude (somehow) that, at each of
the times in this sequence, the probability that the head is on the same vertex as $S$ is at least
$(1 - 1/e)\min_j(\pi(j))$, an approximation to the minimum stationary probability of any vertex.
This is at least plausible: if $j$ is the vertex that $S$ happens to be visiting at one of these times $du$,
then the stationary probability $\pi(j)$ is at least this min expression.
And the random walk by the head has approximately this stationary probability’s chance of being
there at the same time.
(The analysis here seems to be neglecting the approximation issue— I’m not sure if it’s a mistake.)
Since the head has probability at least \((1 - 1/e)\min_j(\pi(j))\) of meeting \(S\) at every time that is a multiple of \(u\), the expected number of multiples of \(u\) that have to elapse before the head meets \(S\) is at most the reciprocal of this expression.
Thus, the expected time is at most \((u(e/(e - 1)))/\min_j(\pi(j))\).
Now they define \(c = u(e/(e - 1))\), and thus rewrite the bound as just \(c/\min_j(\pi(j))\).
Since each stationary probability \(\pi(j)\) is equal to \(degree(j)/(2\epsilon)\), it is \(\geq 1/(2\epsilon)\).
Plugging this into their bound expression yields a bound of \(2c\epsilon\).
Their final conclusion, restated in Corollary 1, is this upper bound on expected time of \(2c\epsilon\).
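Chaining the steps above (my bookkeeping, with the constant \(c\) carried through):

```latex
\[
  E[\text{meeting time}]
  \;\le\; \frac{u \cdot e/(e-1)}{\min_j \pi(j)}
  \;=\; \frac{c}{\min_j \pi(j)}
  \;\le\; c \cdot 2\epsilon
  \;=\; 2c\epsilon,
\]
```

using \(\pi(j) = \deg(j)/(2\epsilon) \ge 1/(2\epsilon)\).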
In interpreting this bound, notice that \(\epsilon\) is a parameter of the graph (number of edges), and so is \(c = u(e/(e - 1))\).
(This depends on the Markov stationary probabilities for this particular graph).
So, the dependency is as they say—only on the graph parameters, and not on the starting locations or \(S\)'s strategy.
But, they don’t give a bound for \(u\) here—just say it’s a parameter of the graph.
But all this just analyzes the expected meeting time between the head and \(S\) (or \(R\)).
It doesn’t say anything about the time to communicate the message within the snake.
So this analysis seems incomplete—surely that involves some extra cost.
Protocol time efficiency properties:
Now they use the previous analysis of the meeting time for the head and \(S\) to get a bound on the communication time from \(S\) to \(R\), taking the entire snake into account.
There are two additions here: getting a better bound on the meeting time, and actually incorporating the bound into a communication cost analysis.
First, consider the meeting time—for the snake to meet \(S\) (or symmetrically, \(R\)).
The snake has \(k\) nodes—so we would expect some improvement in the expected time to meet \(S\), since now it has to meet any one of the nodes in the snake, not necessarily the head.
However, since the motion of the \(k\) support nodes is tightly coupled, the improvement might not be as pronounced as if the nodes were doing independent random walks.
What they get from the larger number \(k\) of nodes is essentially, a dynamically-changing reduced graph.
At any time, the (approximately) \(k\) vertices on which the support nodes reside can be regarded as one “super-vertex” in a reduced network graph.
All of these vertices’ neighbors become neighbors of the one super-vertex.
The super-vertex’s larger degree means it has a larger stationary probability.
Which should translate into a larger chance of meeting \(S\).
They claim they can modify their previous analysis, to now use the randomly-moving super-vertex instead of the randomly-moving snake head.
They seem to be claiming that, for any super-vertex, the degree is at least \(k\) (instead of 1 as before).
So their upper bound on the meeting time now becomes \( 2\epsilon/k \).
That’s just for the meeting time.
To get the communication time in this setting, they have to add in an \( O(k) \) term—because of the time needed to propagate the message through the support.
So they get something like \( 2(2\epsilon/k) + O(k) \).
They claim to optimize this expression when \( k = \sqrt{2\epsilon} \), yielding an overall bound of \( O(\sqrt{\epsilon}) \).
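The optimization is the usual balancing of the two terms; writing the \(O(k)\) term as \(bk\) for a hidden constant \(b\) (my notation):

```latex
\[
  f(k) = \frac{4\epsilon}{k} + bk, \qquad
  f'(k) = -\frac{4\epsilon}{k^2} + b = 0
  \;\Rightarrow\; k^* = 2\sqrt{\epsilon/b},
  \qquad f(k^*) = 4\sqrt{b\epsilon} = O(\sqrt{\epsilon}).
\]
```

Their stated choice \(k = \sqrt{2\epsilon}\) corresponds to \(b = 2\).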
Spanning tree bound:
Finally, they also claim a (better) bound for the case where the snake traverses a spanning tree rather than the whole graph: \( O(n) \).
Proofs not in this paper, though. LTTR.
6.6 Discussion
They also claim a lower bound on walk time, which looks interesting. But I’ll leave it to the reader—anyway, the proof of a key lemma isn’t here.
The conclusion is that the expected time for hitting \( S \) (for some choice of starting positions and \( S \) strategy) is at least \( (n - 1)^2/2\epsilon \).
Future work: Study some forms of constrained motion for the non-support hosts.
7 Hierarchical routing protocol
From Chatzigiannakis, Nikoletseas, Spirakis: An efficient routing protocol for hierarchical ad hoc mobile networks
And also, the POMC paper.
7.1 Overview
This paper presents an embellishment of the Snake protocol.
They introduce a special-case model for ad hoc networks that is based on areas they call “cities”, connected by long-distance links called “highways”.
In the cities, mobile nodes are dense and move fairly randomly.
On the highways, nodes are sparser but their motion is much more predictable.
The highways are traversed fairly frequently.
They claim that such networks are common.
They address the problem of routing a message from one mobile node to another, where the mobile nodes may be in different cities.
The mobile nodes that live in the cities (including $S$ and $R$) are assumed not to travel on the roads; that capability is reserved for special “highway mobile nodes”.
The main idea is to use the snake framework within cities, in order to route a message within the city, between specific \( S \) or \( R \) mobile nodes in the city and a particular “access port location” in the city.
Highway nodes carry the message from the access port of one city to access ports of the other cities.
The access port location is on the highway, and gets visited frequently enough so that, with high probability, a message arriving at the access port from within the city (via a local snake) will be synchronized with a highway node arriving at the port from the highway. They require the other direction too: high probability that a message arriving there from the highway will be synchronized with the local snake, to allow the message to be picked up by the snake and delivered to the right recipient within the city.
Much of the paper deals with the 2-city case; if there are more cities, their protocol involves essentially flooding the message to all cities, where they will be circulated around via their snakes. That is, unless there is some global knowledge of which city a particular destination resides in.
They give simulation results, which they claim show really good performance. However, they are only comparing their protocol to the earlier snake protocol, when run throughout the whole network.
7.2 Model of hierarchical ad-hoc mobile networks
They assume 3D.
Dense city subnetworks, each with a “city graph”, which is just like the overall network graph in the Snake paper. In this graph, each vertex corresponds to a “cell”, or “cube”; it must satisfy the condition that anyone in the cell who sends a message is heard by everyone else in the cell.
Each city has a special location called an “access port”, which is its (unique?) point of connection to the highway.
They divide the mobile nodes into two categories: City nodes, who remain within cities, and highway nodes, who only traverse the highways. They assume that highway nodes traverse the highways fairly frequently. Highway nodes are not controllable—no compulsory motion; rather, the protocol should try to take advantage of their predictable motion.
They use a discrete time (slot) notion. They assume a lower bound $p$ on the probability that, at a given time (slot), some highway mobile user is at a city’s access port.
This notion is a little unclear. They say they assume probability $p$ (a constant) that, at any given time, “the exchange of information by the higher layer is available” at an access port.
What does this actually mean? It seems like two different things:
1. A guarantee that, at any given time (slot), some highway node is at the access port (available to receive a message from the support). The reasonableness of this guarantee depends on the density of travel on the highways.
2. A guarantee that, at any given time (slot), the support is at the access point (available to receive a message from the highway node).
This is a different sort of guarantee from the first one above. Its reasonableness depends on the size of the city and the size $k$ of the snake.
They don’t explicitly say this is what they mean. But the only other interpretation I can see is that some mobile node is sitting permanently at the access point. ??
### 7.3 Their protocol
They use a Snake within each city.
For the highways, they piggyback messages on the predictably-mobile highway nodes.
They don’t need compulsory motion on the highways—they just rely on whoever happens to be traveling there.
They claim that the regular (not random) movement on the interconnection highway helps the communication time quite a bit, over random routing via a snake.
More details:
Each city has one snake, doing a random walk as before.
Assume that sender $S$ has a message to send to another node $R$ not in the same city.
(If in the same city, the usual snake protocol will work.)
Then:
1. When $S$ is within transmission range of the local snake, it gives its message to the snake.
2. When the head of the snake arrives at the access port, then if it happens to meet a highway mobile node there (which it does, with probability at least $p$), it hands off the message to the highway mobile node.
If not, then the snake keeps moving randomly until it happens to return to the access port, and keeps on doing this until it succeeds in meeting a highway node at the port.
(So it doesn’t sound as though anyone remains permanently at the access port.)
3. The highway mobile node moves according to its regular movements on the highway, to the other city’s access point. (Here, they assume there is only one other city, though they claim that their ideas generalize to more cities.)
When the highway mobile node reaches the other city, they say that, again with probability $p$, it meets the other city’s snake.
4. The support in the new city delivers the message to $R$, during its usual random walk.
This is for two cities. They also talk about “modularity”, by which they mean that their algorithm extends to any number of cities.
For finding a target node in an unknown location, they simply have the highway node drop off the message at all cities.
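Step 2 makes each visit to the access port an independent trial that succeeds with probability at least $p$, so the number of port visits until a successful handoff is (at most) geometric with mean $1/p$. A minimal simulation sketch of this waiting time (class and method names are mine, not from the paper):

```java
import java.util.Random;

// Sketch: the snake's head retries at the access port until it meets a
// highway node; each visit succeeds independently with probability p,
// so the number of visits is geometric with mean 1/p.
public class PortHandoff {
    public static double meanVisits(double p, int trials, long seed) {
        Random rnd = new Random(seed);
        long total = 0;
        for (int t = 0; t < trials; t++) {
            int visits = 1;
            while (rnd.nextDouble() >= p) visits++;  // retry until a highway node is present
            total += visits;
        }
        return (double) total / trials;
    }

    public static void main(String[] args) {
        // With p = 0.3, the snake needs about 1/0.3 ~ 3.3 port visits on average.
        System.out.println(meanVisits(0.3, 100_000, 42L));
    }
}
```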
### 7.4 Analysis
They try to explain informally why the original snake protocol wouldn’t behave well on the hierarchical network.
I’m not even sure how the snake is supposed to work in this case.
What happens when it reaches an access port? Does it count the highway as one of the adjacent edges and include it in its uniform random choice of next place to go?
What if no highway node is there at the time? Then choose another direction?
They claim that there is only a small probability that the snake would pass through the access port and head to the right city. They claim that, in contrast, the hierarchical algorithm guarantees, with high probability, that within a small number of visits to the access port, the message will be successfully handed off.
Another reason why the hierarchical protocol should behave better than the pure snake protocol is the much greater opportunities for concurrent processing in the hierarchical protocol. The various snakes can continue their work traveling around cities, collecting and distributing messages, all in parallel, and in parallel with the useful work done by the highway nodes in conveying their messages between cities.
Another reason the highway protocol does better is that it brings the message to all cities, essentially in parallel, and then it gets distributed throughout all the cities in parallel, whereas the snake would bring it to only one at a time. Again, a matter of concurrency.
They carry out an average analysis, assuming that all the snakes are performing random walks, and also that the other city nodes are doing random walks.
I’ll skip the bounds; the analysis is somewhat different from the one in the other paper. Using a support of size $\sqrt{n}$ for each city, where the city has $n$ cells, is optimal; with this, they get average message delays linear in $n$.
(In this analysis, they assume the delay on the highway is a constant.)
### 7.5 Experimental results
For hierarchical graphs, they claim that the hierarchical protocol is much better than the pure Snake protocol.
Their experiments involved one fixed $S$ in one city and another fixed $R$ node in another city. One access port in each city, one highway connecting them.
Though $S$ and $R$ are fixed, they allow $S$ to send many messages to $R$. So, obviously, their algorithm can take advantage of all the concurrency, whereas the original algorithm had very little—only one snake!
For the new algorithm, the key factor in determining expected time turns out to be the probability $p$.
But $p = 0.3$ is good enough.
## 8 Echo algorithms for leader election and counting
This is for the cellular model.
But it might provide ideas for similar algorithms in mobile networks, if we use a sufficiently high level of abstraction (e.g., use virtual base stations).
The problem addressed here is leader election among mobile nodes. As a secondary, closely related problem, they consider the accumulation of the exact total count of nodes in the system.
They give a simple protocol whereby the fixed network collects the counts using a fairly standard Echo-style protocol.
### 8.1 Introduction
Usefulness of leader election and node counting:
1. For applications: Inventory applications, animal population monitoring, traffic monitoring,...
2. For network control:
Could use them in constructing routing protocols, managing data, etc., using a layer organization wherein these services are built on top of the leader-election and counting services.
Maybe could also use them for low-level network monitoring and control, e.g., for topology control?
This suggests a layer organization wherein leader election and counting run as “coroutines” with the network control services (since they may depend on each other).
### 8.2 The model, the problem
Mobile networks with fixed base stations (Mobile Service Stations—MSSs), each controlling a cell. Mobile nodes move around to various cells. While in a cell, a mobile node communicates only with that cell’s MSS.
When a mobile node enters a cell, it sends a join message to the MSS.
They model the fixed network of MSSs as an undirected graph $G = (V, E)$, where $|V| = n$ is the number of MSSs and $|E| = \epsilon = O(n^2)$. The edges here represent fixed wired links.
They also assume $m$ mobile nodes.
The problem:
One of the mobile nodes initiates the algorithm to find the total number of mobile nodes (or elect a leader).
### 8.3 The protocol
The protocol works in two “tiers”, with the fixed coordinators at the higher tier, managing the mobile nodes at the lower tier.
The basic organization is “Echo” style, by which they mean that a central root node starts a broadcast on a fixed tree network, followed by a convergecast back.
The particular Echo protocol used here assumes a fixed tree of the wired network of MSSs. It involves four Echo phases.
To begin the algorithm, the initiator mobile node sends a message to its local MSS, telling it to be the coordinator of the algorithm.
The coordinator MSS broadcasts a $\langle \text{count} \rangle$ message in its cell.
1. Echo phase 1:
The coordinator starts an Echo containing a $\langle \text{count}_{\text{ok}} \rangle$ message, along the tree of MSSs.
Each MSS that receives the $\langle \text{count}_{\text{ok}} \rangle$ message continues the Echo by propagating the $\langle \text{count}_{\text{ok}} \rangle$ message along the tree.
In parallel, it broadcasts a $\langle \text{count} \rangle$ message in its own cell.
After the completion of Echo phase 1, every MSS knows about the execution of the counting algorithm and has broadcast a $\langle \text{count} \rangle$ message in its cell.
Now, some activity goes on concurrently with this phase, and continues afterwards.
Namely, when a mobile node receives a $\langle \text{count} \rangle$ message, it responds with a $\langle \text{count}_{\text{me}} \rangle$ message containing its own id.
Each MSS keeps track of the counts it collects in this way, in a variable *size*.
Also, if a new mobile node arrives in a cell, it sends a $\langle \text{join} \rangle$ message, to which the MSS responds with another $\langle \text{count} \rangle$ message, so that the new mobile node will have a chance to respond with a $\langle \text{count}_{\text{me}} \rangle$.
Note that no mobile node sends more than one $\langle \text{count}_{\text{me}} \rangle$, ever, thus preventing double-counting.
2. Echo phase 2:
Now the initiator MSS sends a $\langle \text{size}_{\text{ok}} \rangle$ message in an Echo wave.
In the convergecast part of this phase, each MSS sends its current value of size up to its parent; when it does so, it no longer sends any more $\langle \text{count} \rangle$ messages.
At the end of Echo phase 2, the initiator MSS should know the total number of nodes in the network.
Questions:
A possible problem: A mobile node might have been missed because it moved from cell to cell during the execution of the protocol.
At what point does an MSS stop accepting new $\langle \text{count}_{\text{me}} \rangle$ messages?
Presumably after it has sent its size value upwards, towards its parent.
But then can’t some arrive after this point?
3. Echo phase 3:
Next, the initiator MSS broadcasts an $\langle \text{inform}_{\text{ok}} \rangle$ message in another Echo wave.
This message contains the determined size.
Each MSS broadcasts this within its cell.
At the end of Echo phase 3, all the mobile nodes, as well as the MSSs, are supposed to know the (same) size estimate for the network.
4. Echo phase 4:
A final phase, in which the initiator informs everyone about the completion of the counting.
The algorithm can be modified to elect a unique leader, e.g., the one with the max id.
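The broadcast/convergecast core of phases 1 and 2 can be sketched on a fixed tree of MSSs. The names below are mine, and the sketch ignores mobility: it assumes each MSS has already collected the count-me responses from its own cell.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Echo convergecast of phases 1-2: each MSS holds the
// number of count_me responses collected in its own cell ("size"),
// and the convergecast returns the subtree total to the coordinator.
public class EchoCount {
    public final int cellCount;                 // count_me messages heard in this cell
    public final List<EchoCount> children = new ArrayList<>();

    public EchoCount(int cellCount) { this.cellCount = cellCount; }

    public EchoCount addChild(EchoCount c) { children.add(c); return this; }

    // Convergecast: a node reports its own count plus its children's reports.
    // (After reporting, a real MSS would stop sending count messages.)
    public int convergecast() {
        int total = cellCount;
        for (EchoCount c : children) total += c.convergecast();
        return total;
    }

    public static void main(String[] args) {
        EchoCount root = new EchoCount(2)
                .addChild(new EchoCount(3).addChild(new EchoCount(1)))
                .addChild(new EchoCount(0));
        System.out.println(root.convergecast()); // 2+3+1+0 = 6
    }
}
```

Note that the correctness question raised below — a host moving between cells during the run — is exactly what this static sketch cannot capture.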
Lemma 2.1. says that the algorithm correctly counts all the mobile nodes.
We already know that no one gets double-counted.
So the key issue here is showing that every host in fact gets counted somewhere.
They argue this, though I don’t find the argument convincing.
It doesn’t seem to be an actual proof, but just an example of a particular type of motion.
They talk about a host that moves from MSS $S_1$ to $S_2$, which “finish their execution of the protocol” (what does that mean?) at times $t_1$ and $t_2$, respectively.
They describe one case where a host moves from S1 to S2 (but this is not a general case—just one possibility).
But, to cope with this, it sounds like they are modifying the protocol. So this may be wrong...needs fixing.
HW exercise: Fix this?
Chandy-Lamport-style (or Fischer-Griffeth-Lynch-style) snapshot ideas may be useful here.
They give a nice time bound, linear in the diameter of the fixed network.
They also analyze a more abstract “cost” measure, which turns out to be linear in the number of edges in the fixed network.
Aspect and XML-oriented Semantic Framework Generator: SmartTools
Didier Parigot, Carine Courbis, Pascal Degenne, Alexandre Fau
Claude Pasquier, Joël Fillon, Christophe Held, Isabelle Attali
INRIA Sophia-Antipolis - OASIS project
2004, route des Lucioles - BP 93
06902 Sophia-Antipolis cedex, France
First.Last@sophia.inria.fr
Abstract
SmartTools is a semantic framework generator, based on XML and object technologies. Thanks to a process of automatic generation from specifications, SmartTools makes it possible to quickly develop environments dedicated to domain-specific and programming languages. Some of these specifications (XML, DTD, Schemas, XSLT) are issued by the W3C, which is an important source of varied emerging domain-specific languages. SmartTools uses object technologies such as visitor patterns and aspect-oriented programming. It provides code generation adapted to the usage of those technologies to support the development of semantic analyses. In this way, we obtain at minimal cost the design and implementation of a modular development platform which is open, interactive, uniform, and, most importantly, prone to evolution.
Key words: software generation, development environment,
semantic analyses, aspect-oriented programming, visitor pattern,
program transformation, XML, XSLT.
1 Introduction
With new technologies related to data processing for Internet applications, the concept of language is more and more used to structure information. Therefore, the World Wide Web Consortium (W3C) has introduced new formalisms such as DTDs (Data Type Definitions) or Schemas that popularize the concept of abstract syntax, the basic component to manipulate any program. Additionally, the software quality and the development speed are of major concern in this particular application area. That justifies the creation of a software generator strongly based on XML (eXtensible Markup Language) and object technologies, named SmartTools.
This is a preliminary version. The final version will be published in
Electronic Notes in Theoretical Computer Science
URL: www.elsevier.nl/locate/entcs
The main goal of this software generator is to help designers of domain-specific or programming languages. No more than one specification (e.g. a DTD) is needed to quickly produce a dedicated development environment. Both the target environment and the SmartTools framework must fulfill the following requirements:
- easy to use with a minimal knowledge and based on well-known techniques or standard specifications,
- modular and flexible implementation based on re-usable and generic components, and on a distributed software architecture,
- user-friendly thanks to a Graphic User Interface (GUI) that offers multi-views and an interactive environment,
- open thanks to a standard data exchange format used to communicate with its components and other external applications.
To ease the development of semantic analyses, several techniques have been introduced into SmartTools. First, the visitor design pattern solution [8] was largely automated with the generation of Java source code from abstract syntax definitions. Second, aspect-oriented programming was added to obtain more re-usable semantic components. This new functionality does not require any program transformation. Thus, the addition of aspects on a visitor can be completely dynamic (without recompilation). Section 2 presents these semantic tools.
To meet the architecture requirements, the modular software architecture was built around a central software component: the message controller. SmartTools is made of several independent software components that communicate with each other by exchanging asynchronous messages. The XML technologies are used to encode these messages. In Section 3, the modular architecture of SmartTools is described.
Concerning the interactive requirements, SmartTools has an extensible and modular GUI with a set of pretty-printers or viewers strongly based on XML technologies. For data integration, and to be open to new application fields, the XML format is used for all data exchange between components and as a description language for new applications. These interactive functionalities are presented in Section 4.
Regarding the re-usability requirement, SmartTools uses and provides several advanced software technologies stemming from various research works [2,4,10,11,15,22] but homogeneously gathered together. In fact, web applications and the emergence of XML technologies are assets for a wide diffusion and new application fields for this software generator.
2 Semantic Tools
Internally, SmartTools uses extended and strongly typed abstract syntax (AST) definitions for all its tools. The important notions of these definitions are: operators and types. The operators are gathered into named sets: types. The sons of operators are typed and named. Figure 1 shows the definition of our toy language: tiny\(^1\). For example, the affect operator belongs to the Statement type and has two sons: the first one is of type Var and the second one of type Exp.
```
Formalism of tiny is
Root is Top;
Top = program(Decls declarationList, Statements statements);
Decls = decls(Decl[] declarationList);
Decl = intDecl(Var variable), booleanDecl(Var variable);
Statements = statements(Statement[] statementList);
Statement = affect(Var variable, Exp value),
            while(ConditionExp cond, Statements statements),
            if(ConditionExp cond, Statements statementsThen,
               Statements statementsElse);
ConditionOp = equal(ArithmeticExp left, ArithmeticExp right),
              notEqual(ArithmeticExp left, ArithmeticExp right);
ConditionExp = %ConditionOp, true(), false(), var;
ArithmeticOp = plus(ArithmeticExp left, ArithmeticExp right),
               minus(ArithmeticExp left, ArithmeticExp right),
               mult(ArithmeticExp left, ArithmeticExp right),
               div(ArithmeticExp left, ArithmeticExp right);
ArithmeticExp = %ArithmeticOp, int as STRING, var as STRING;
Exp = %ArithmeticOp, %ConditionOp, var, int, true, false;
Var = var;
End
```
Fig. 1. the AST definition of tiny
From any AST definition, SmartTools can automatically generate a structured editor specific to the language. To facilitate the editing (to copy-paste nodes), it is useful to make the type inclusion\(^2\) possible.
We want, as much as possible, to use existing software components stemming from the W3C standards, such as the DOM (Document Object Model) API to handle XML documents. But this latter API does not consider strongly typed structures. To manipulate strongly typed trees, we have extended it with the notions of fixed node, listed node and typed node (cf. Figure 2). In this way, the tree consistency is guaranteed by the Java type-checker at its construction. For each operator, SmartTools automatically generates one class and the associated interface (Figure 3 shows the interface generated for the affect operator), and one interface per type. These classes contain the getters and setters needed to handle the sons (e.g. getValueNode, setValueNode).
It is important that the language designers can define their languages (abstract syntax) by using standard formats (DTD or Schema) proposed by the W3C and not necessarily with the internal AST definition format of SmartTools. Therefore, we have implemented conversion tools with some restrictions. For example, the notion of type does not explicitly exist within the
\(^1\) used all along this article
\(^2\) marked with the % sign in Figure 1
Fig. 2. Class hierarchy for the *affect* operator
```
package tiny.ast;
public interface AffectNode extends StatementType {
public tiny.ast.VarType getVariableNode();
public tiny.ast.ExpType getValueNode();
public void setValueNode(tiny.ast.ExpType tree);
public void setVariableNode(tiny.ast.VarType tree);
}
```
Fig. 3. Generated *affect* operator interface: *AffectNode*
DTD format i.e. the elements (seen as operators) do not belong to named sets. As this notion was essential, we had to define a type inference mechanism to convert DTDs. Additionally, the right part of element definitions should only contain parameter entity references to indicate the types of the sons (e.g. the line 6 of Figure 4 shows a DTD-equivalent definition of the *affect* operator). Unfortunately, few DTDs are written in this way. To be able to accept as many DTDs as possible, a more complex type analysis (type inference) was carried out.
```
<!ENTITY % Top 'program'>
<!ENTITY % Statements 'statements'>
<!ENTITY % Statement 'if|while|affect'>
<!ELEMENT program ((%Decl;),(%Statements;))>
<!ELEMENT statements (%Statement;)>
<!ELEMENT affect ((%Var;),(%Exp;))>
```
Fig. 4. Part of the generated DTD of *tiny*
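For DTDs written in this disciplined style, the type information that the conversion tool needs can be read directly off the parameter entities. A minimal sketch of that first step (the class name and regex are mine; a real converter would use a proper DTD parser plus the type-inference analysis mentioned above):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: recover the type -> operators mapping from DTD parameter
// entities written in the style of Fig. 4 (<!ENTITY % Type 'op1|op2'>).
public class DtdTypes {
    static final Pattern ENTITY =
            Pattern.compile("<!ENTITY\\s+%\\s+(\\w+)\\s+'([^']*)'>");

    public static Map<String, String[]> typesOf(String dtd) {
        Map<String, String[]> types = new LinkedHashMap<>();
        Matcher m = ENTITY.matcher(dtd);
        while (m.find()) types.put(m.group(1), m.group(2).split("\\|"));
        return types;
    }

    public static void main(String[] args) {
        String dtd = "<!ENTITY % Statement 'if|while|affect'>";
        // The Statement type gathers the operators if, while, affect.
        System.out.println(String.join(",", typesOf(dtd).get("Statement"))); // if,while,affect
    }
}
```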
Moreover, we have implemented generators that produce a parser and the associated pretty-printer to manipulate programs with a more readable format than the XML one. For this purpose, the designer has to provide extra attributes information on each element (or operator) definition (see attributes in Figure 5). This possibility is useful for designers that do not have expertise on how to write a parser and makes sense only for small and unambiguous languages.
Figure 6 shows all the specifications that can be generated from an AST specification:
Fig. 5. Extra data of the `affect` operator useful for generating a parser and the associated pretty-printer
- the API of the language (i.e. one class and the associated interface per operator, and one interface per type),
- the basic visitors useful for creating semantic analyses,
- a parser for the language (if extra syntactic sugars are provided as operator attributes in the language definition),
- a pretty-printer to unparse ASTs according to these extra syntactic sugars,
- a minimal resource file that contains useful information for the structured editor and the parser,
- the DTD or the Schema.
Fig. 6. All the specifications generated from an AST
For example, thanks to these tool generators, the `tiny` environment was automatically generated only from one AST specification (see Figure 1 page 3), one xprofile specification (see Figure 7), and the type-checker visitor (100 Java lines).
**Semantics**
This sub-section presents ways to write analyses (e.g. a type-checker, an evaluator or a compiler) on programs by using the visitor design pattern. If the reader wants more details and explanations on this well-known methodology, he can refer to [8,20,21]. In particular, we present three extensions of the visitor pattern technique: v1 using a reflexivity mechanism with profiled visits and tree traversal possibilities, v2 adding simple aspect-oriented programming, v3 splitting the tree traversal (visit method calls) and the semantic actions by using more complex aspects.
**Reflexive visitors (v1)**
To make the development of visitors based on the AST definitions easier, SmartTools automatically generates two visitor classes: `AbstractVisitor` and `TraversalVisitor`. The abstract visitor declares all the visit methods (one by operator). The `TraversalVisitor` inherits from the `AbstractVisitor` and implements all the visit methods in order to perform a depth-first tree traversal. This visitor can be extended and its visit methods refined (overridden) to specify an analysis.
Thanks to the xprofile specification language of SmartTools, it is possible to specify the visit signatures i.e. to generate visits with different names, return types, and parameters. The granularity of this personalization is at the (AST) type level. Figure 7 presents the xprofile specification of a type-checker for tiny. From this specification, the system automatically generates the two correctly-typed visitors (AbstractVisitor and TraversalVisitor). Only useful visit methods have to be overridden to implement the type-checker (see Figure 8 for the affect operator). The advantage of using profiled visits is to avoid casts and obtain more readable visitor programs.
Fig. 7. Visit signatures of a type-checker for tiny

```java
public String check(AffectNode node, TinyEnv env) throws VisitorException {
    String varName = node.getVariableNode().getValue();
    String typeLeft = env.getType(varName);
    String typeRight = check(node.getValueNode(), env); // visit the value node
    if (typeLeft == null)
        errors.setError(node, "This variable " + varName + " was not declared");
    else if (!typeRight.equals(TinyEnv.ERROR) && !typeLeft.equals(typeRight))
        errors.setError(node, "Incompatible types: " + varName + " is a " +
            (typeLeft.equals(TinyEnv.INT) ? "int" : "bool"));
    return null;
}
```

Fig. 8. Affect visit of the type-checker
With the xprofile language, it is also possible to specify the tree traversal (from the starting node to the destination node(s)) of a visitor. Thus, only the nodes on the path are visited instead of all the nodes of the tree. It reduces the visitor runtime on sizeable trees and above all the size of the generated visitors. A dependence graph analysis on the AST definition is performed to generate the corresponding abstract and traversal visitors with the 'right' visits according to the given path. For example with the traversal specified on Figure 9, only the visits of the while and affect operators and the visits of the operators contained between the root (TOP) and these operators (i.e program, statements and if according to the AST definition of Figure 1 page 3) will be called.
```
Traversal Test:
Top -> while, affect;
```

Fig. 9. Traversal specification from the root (TOP) to while and affect
In SmartTools, we use the Java reflexivity mechanism to implement the visitor technique, and not the classical solution of a specific method, usually denoted accept, defined on each operator³. Indeed, the introduction of a visitor profile prevents using this classical solution (accept method). A generic method (named invokeVisit) is executed when any visit method is called. The goal of this generic method is to invoke the 'right' visit method (with a strongly-typed node) by using reflexivity.
The use of reflexivity is runtime-expensive. To accelerate the invoke process, an indirection table is statically produced at compilation-time when the abstract visitor is generated. This table contains for each pair (operator, type) the Java reference to the visit java.lang.reflect.Method object to call. With this table, it is also possible to change the visit method name and to have different arguments. This solution is a simplification of the multi-method approach that dynamically performs the search of the best method to apply. We have compared these two approaches by using a Java multi-method implementation [7]. The performances are equivalent, but our approach is much easier to realize.
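A minimal sketch of this dispatch scheme (class and method names are illustrative, not SmartTools' actual generated code): a table maps each node class to its visit `Method`; here it is filled lazily rather than produced at generation time, but the effect is the same — the costly reflective lookup happens once per operator instead of once per call.

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Sketch of the reflexive dispatch with an indirection table: the table
// maps each node class to the visit Method to invoke on it.
public class ReflexiveVisitor {
    public interface Node {}
    public static class AffectNode implements Node {}
    public static class WhileNode implements Node {}

    private final Map<Class<?>, Method> table = new HashMap<>();

    // visit methods, one per operator (here they just name the operator)
    public String visit(AffectNode n) { return "affect"; }
    public String visit(WhileNode n)  { return "while"; }

    public Object invokeVisit(Node n) throws Exception {
        Method m = table.get(n.getClass());
        if (m == null) {                 // reflective lookup, done once per class
            m = getClass().getMethod("visit", n.getClass());
            table.put(n.getClass(), m);
        }
        return m.invoke(this, n);
    }

    public static void main(String[] args) throws Exception {
        ReflexiveVisitor v = new ReflexiveVisitor();
        System.out.println(v.invokeVisit(new AffectNode())); // affect
    }
}
```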
**Visitors with Aspect (v2)**
The reflexivity mechanism used to implement the visitor pattern technique makes it possible to execute additional code before or after the visit calls. In this way, a concept of aspect-oriented programming [12,14] specific to our visitors can be added without modifying the source code, unlike the first versions of AspectJ [1,13]. An aspect can be defined just by implementing the Aspect interface and then recorded (see methods on Figure 10) on any visitor.
³ SmartTools can also help designers to develop this kind of efficient visitors. But, their codes are less readable (more casts, no aspect, no tree traversal choice, etc) than the v1 or v2 visitors. Therefore, we do not describe them in this article.
For example, if the aspect of Figure 11 is recorded on a visitor, it will trace out all the called visits.
```
VisitorImpl
+visit(node:Node, params:Object): Object
+invokeVisit(params:Object[]): Object
+addAspect(aspect:Aspect): void
+removeAspect(aspect:Aspect): void
+addAspectOnOperator(op:Operator, aspect:Aspect): void
+removeAspectOnOperator(type:Type, aspect:Aspect): void
```

Fig. 10. Visitor with aspect (v2) API
```java
package fr.smarttools.debug;
import fr.smarttools.tree.visitorpattern.Aspect;
import fr.smarttools.tree.Type;
public class TraceAspect implements Aspect {
public void before(Type t, Object[] param) {
System.out.println("Start visit on " + param[0].getClass());
}
public void after(Type t, Object[] param) {
System.out.println("End visit on " + param[0].getClass());
}
}
```
Fig. 11. Aspect that traces out the visit methods
Several aspects can be connected on a visitor. They are executed in sequence (according to the registration order). This connection (as well as the disconnection) can be done at runtime. The behavior of a visitor can thus be modified dynamically by addition or withdrawal of these aspects. For example, a graphical debug mode for the visitors with a step-by-step execution was specified as an aspect regardless of any visitor. To add these aspects on the v1 visitors, the generic method (invokeVisit) was extended.
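The calling discipline described above (each aspect's before method, then the visit itself, then each aspect's after method, always in registration order) can be sketched as follows. Aspect and AspectVisitor are simplified stand-ins for the types of Figures 10 and 11; in particular, the real Aspect interface receives a Type as first argument.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the SmartTools Aspect interface (Figure 11).
interface Aspect {
    void before(Object node, Object[] params);
    void after(Object node, Object[] params);
}

// Sketch of how a v2 visitor runs its recorded aspects around each visit.
class AspectVisitor {
    private final List<Aspect> aspects = new ArrayList<>();

    public void addAspect(Aspect a) { aspects.add(a); }       // connect at runtime
    public void removeAspect(Aspect a) { aspects.remove(a); } // disconnect at runtime

    public Object invokeVisit(Object node, Object[] params) {
        for (Aspect a : aspects) a.before(node, params); // registration order
        Object result = doVisit(node, params);           // the actual visit call
        for (Aspect a : aspects) a.after(node, params);
        return result;
    }

    // Stand-in for the reflective dispatch of the real visitor.
    protected Object doVisit(Object node, Object[] params) { return node; }
}
```

Because the aspect list is consulted on every invokeVisit, adding or removing an aspect immediately changes the visitor's behavior, which is how the step-by-step debug mode mentioned above can be attached to any visitor.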
**Visitor with Tree Traversal and complex Aspects (v3)**
With the concept of aspect-oriented programming, it is possible to split the tree traversal (visit method calls) and the semantic processing (semantic actions). Let us suppose that the visit code of the affect(Var, Exp) operator has this shape:
```java
visit(AffectNode node, ...) {
    // codeBefore
    // visit of the first son
    // codeBetween_1_2
    // visit of the second son
    // codeAfter
}
```
One can observe that the semantic part (i.e. everything except the recursive calls) is divided into N+1 pieces of code for an operator with N sons. These N+1 pieces can be treated like aspects with new anchoring points, i.e. before, between and after the visit method calls of the sons. We have defined a new visitor (named the v3 visitor) that takes as arguments a tree traversal and one or more semantic actions (in the form of aspects), as shown in Figure 12. This visitor calls these aspects at these new anchoring points. Therefore, for each operator, these aspects must have, in addition to the traditional before and after methods, the between_i_i+1 methods (code to be executed between the visits of the i-th and (i+1)-th sons).
This new visitor can connect one or more aspects described in the v2 visitors. Figure 13 shows the type-checker semantics associated with the affect operator using this new form of aspect. There are no more recursive calls, unlike the v1 (see Figure 8, page 6, line 4) or v2 visitors, but it is necessary to use stacks (see Figure 13, lines 5 and 6) to transmit the visit results of the sons.
```java
public class Semantic {
    public void before(AffectNode node, Object param) {}
    public void between1_2(AffectNode node, Object param) {}
    public void after(AffectNode node, Object param) {
        String typeRight = (String) typeStack.pop();
        String typeLeft = (String) typeStack.pop();
        // same "if" code as in Figure 8 (lines 6 to 12)
    }
}
```
Fig. 12. v3 visitor
Fig. 13. Type-checker of the affect operator
The type-checker of tiny was extended with an initialization check on variables (see Figure 14) only by composing the two aspects (see Figure 15). The main interest of this programming style is to make the extension of analyses possible, without modification, only by adding new aspects. In this way, analyses are modular and reusable. However, these analyses are more complex to program because the semantics and the tree traversal are split (compare Figures 13 and 8, page 6). Currently, we study how to share data between semantics and the problems linked to the common tree traversal (e.g., what to do if one semantics wants to loop on a node and the others do not?); we also study mechanisms to ease the programming of these aspects by hiding the stack management.
For the v3 visitor (see Figure 12), there is also a generic method that manages the next node to visit according to the current position, the tree traversal and some special traversal instructions. This method also copes with
```java
public void before(AffectNode node, Object param) {unplugVariableCheck = true;}
public void visit1(AffectNode node, Object param) {unplugVariableCheck = false;}
public void after(AffectNode node, Object param) {}
```
Fig. 14. Initialization check for the affect operator (v3 visitor)
```java
TypeCheckerVisitor typeCheck = new TypeCheckerVisitor();
TinyEnv env = typeCheck.getEnv();
InitVarCheckerVisitor initVarCheck = new InitVarCheckerVisitor(env);
new Visitor(new LeftToRightTreeTraversal(),
new Semantics[]{typeCheck, initVarCheck}).start(tree, null);
```
Fig. 15. Composition of two aspects
the search of the next method to call and the invocation of the v2 aspects on these visits.
3 Architecture
SmartTools is composed of independent software modules that communicate with each other by exchanging asynchronous messages. These messages are typed and can be considered as events. Each module registers itself on a central software component, the message controller (c.f. Figure 16), to listen to some specific types of messages. It can react to them by possibly posting new messages. The controller is responsible for managing the flow of messages and delivering them to their specific destination(s). The components of SmartTools are thus event-driven. This section presents the different modules of SmartTools and describes the behavior of the message controller.

Fig. 16. Architecture of SmartTools
The main software modules of SmartTools are the following:
- Each **document** contains an AST. In Figure 16, *Document 1* and *Document 2* contain the ASTs on which the user is working. *Document GI* is a special one. It contains the AST describing the structure of the GUI (e.g. the AST of the Figure 23 page 16).
- The **user interface** module manages the views, the menus and the toolbar of SmartTools.
- Each **view** is an independent module showing the content of a document in a format depending on the type of the view. For example, some views display the tree in colored-syntax text format, others as a graphical representation.
- The **parser manager** chooses the right parser to use for a file. Then, it runs the parser and builds the corresponding AST. The **document manager** uses this tree to build a document module and connects it to the message controller.
- The **base** is a module that contains the definitions of resources used in SmartTools: colors, styles, fonts, menus, toolbars, actions, etc.
Of course, new types of modules can register themselves on the message controller. That is one of the ways to extend the features of SmartTools for a specific purpose or to embed SmartTools in another environment.
When a module needs to communicate with another module, it creates a message and posts it on the message controller. Then, the message controller broadcasts this message to the appropriate listeners (modules) that will react to it. Thus, modules that want to receive special types of messages from the message controller have to become listeners of these types of messages. They have to implement the MsgListener interface and provide a receive(XxxMsg) method for every type of supported message. Then, they have to register on the message controller (see code just below) and obtain their unique module identifier from it.
```java
idDoc = msgController.register(this);
```
XxxMsg in the receive method stands for the class of the expected message. Messages are typed objects, i.e. there is one specific class for every type of message. Their common behavior is held in one abstract class that is the super class of all the messages. New kinds of messages can be created by extending that common class or any other existing message class.
In the following example, the module expects to receive SelectMsg, CloseDocMsg and CutMsg messages sent to the module identified by idDoc and coming from an anonymous sender.
```java
msgController.addMsgListener("SelectMsg", idDoc, Msg.ANONYMOUS);
msgController.addMsgListener("CloseDocMsg", idDoc, Msg.ANONYMOUS);
msgController.addMsgListener("CutMsg", idDoc, Msg.ANONYMOUS);
```
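Putting the registration steps together, the following self-contained sketch models the controller's behavior. MsgController, Msg and MsgListener are stubs modeled on the text, not the real SmartTools classes; in particular, addMsgListener here takes the listener explicitly, whereas in SmartTools the calling module is implicit.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stubs modeled on the message-controller protocol described in the text.
interface MsgListener { void receive(Msg msg); }

class Msg {
    static final int ANONYMOUS = -1;
    final String type; final int dest; final int sender;
    Msg(String type, int dest, int sender) {
        this.type = type; this.dest = dest; this.sender = sender;
    }
}

class MsgController {
    private int nextId = 0;
    // (message type, destination id) -> interested listeners
    private final Map<String, List<MsgListener>> listeners = new HashMap<>();

    // A module registers and obtains its unique identifier.
    int register(MsgListener module) { return nextId++; }

    // Unlike the SmartTools API, the listener is passed explicitly here.
    void addMsgListener(String type, int destId, int senderId, MsgListener l) {
        listeners.computeIfAbsent(type + "@" + destId, k -> new ArrayList<>()).add(l);
    }

    // Deliver only to modules listening for this message type and destination.
    void post(Msg msg) {
        for (MsgListener l : listeners.getOrDefault(msg.type + "@" + msg.dest, List.of()))
            l.receive(msg);
    }
}
```

A module that never subscribed to a message type simply never sees it, which is what lets documents and views stay decoupled from each other.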
Documents (i.e. ASTs) and views are independently registered on the message controller. A document does not need to know how many views are related to it. When a modification is made, the document posts a modification message. The type of that message indicates which modification has been done and the message body contains the path of the modified node (from the root of the tree). For some kinds of messages, the change is also specified. Such messages will be sent only to the views that are registered to receive these modification messages coming from this document. Other modules will not receive them.
The message controller has a built-in message filtering capability. It is possible to write filters that watch or influence the flow of input and output
messages on the controller. That filtering capability has been successfully used for several specific needs: benchmarking, debugging, undoing user actions, and automatically translating messages into another format (SOAP messages).
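The filtering capability can be pictured as a chain applied to each message before delivery. The FilteringController type below and its convention of returning null to drop a message are invented for this sketch; SmartTools' actual filter API is not documented here.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Invented sketch of a filter chain: each filter may observe a message,
// rewrite it, or drop it (by returning null) before normal delivery.
class FilteringController {
    private final List<UnaryOperator<String>> filters = new ArrayList<>();
    final List<String> delivered = new ArrayList<>();

    void addFilter(UnaryOperator<String> f) { filters.add(f); }

    void post(String msg) {
        for (UnaryOperator<String> f : filters) {
            msg = f.apply(msg);
            if (msg == null) return; // a filter swallowed the message
        }
        delivered.add(msg);          // normal delivery
    }
}
```

A benchmarking filter would count messages and pass them through unchanged; a translation filter (e.g. to SOAP) would rewrite them; an undo facility can record them.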
The architecture of SmartTools is designed to ease connection with other development environments or tools. Some experiments [23] are in progress to provide several features of SmartTools as Web services and to use them from a client tool running on a .NET platform.
4 Graphical User Interface
SmartTools has a GUI (cf. Figure 17) based on the document/views concept, i.e. the user interface is the framework in which views on a document (AST) can be displayed and manipulated. For each open document, it is possible to build and display one or more views showing different aspects of the tree according to different formats. XML technologies are extensively used to build this GUI and the different views.

Fig. 17. An example of Graphical User Interface
A view on a document is built by applying a transformation to its AST. We have experimented with two different approaches to perform tree transformations and build graphical views. The first approach was to write a visitor that transforms the tree and directly builds the hierarchy of graphical components. That was fast and efficient but required recompiling every time the transformation changed. The second technique was to specify a tree transformation using XSLT to produce a BML (Bean Markup Language) description of the graphical components to create. The BML result is then interpreted (see Figure 18) to build the actual view. Even though there is a loss of
efficiency when using XSLT and BML engines, the technique has proved to be easier to learn, more open to new view designs, and well-adapted for sending views through networks.

**Fig. 18. Schema of graphical view construction**
**Xpp language**
A higher-level transformation language, called Xpp, has been defined on top of XSLT to specify the pretty-printing of XML documents. Its features are similar to those of XSLT but it is much more concise, more readable and it can perform transformations only on subtrees for incremental purposes. Xpp consists of a set of rule definitions (see Figure 19) which match patterns with explicit variables for subtrees. These variables are used in the right part for recursive calls.
```
Rules
formal tiny
...
affect(x, y) -> h(x, label("="), y, label(";"));
plus(x, y) -> h(x, label("+"), y);
...
```
**Fig. 19. A part of the Xpp specification**
We have defined formatting functions (horizontal or vertical alignment, indentation, etc.) that designers may use to write their pretty-printers in the right part of the rules. When Xpp specifications are translated into XSLT stylesheets (see Figure 20), the designers only need to indicate the expected output format (either BML, HTML or text at the moment) useful for the system to choose the right implementation of the formatting functions (see Figure 21).
Fig. 20. XSLT program for the plus operator
```xml
<alias:template match="plus[*[1]][*[2]][count(*)=2]">
  <alias:variable name="left" select="./*[1]"/>
  <alias:variable name="right" select="./*[2]"/>
  <bean class="fr.smarttools.view.GNodeContainer">
    <property name="layout">
      <bean class="fr.smarttools.view.HFlowLayout">
        <property/>
      </bean>
    </property>
    <add>
      <alias:apply-templates select="$left"/>
    </add>
    <bean class="fr.smarttools.view.FJLabel">
      <args>
        <string>+</string>
      </args>
    </bean>
    <add>
      <alias:apply-templates select="$right"/>
    </add>
  </bean>
</alias:template>
```
Fig. 21. From \texttt{Xpp} to XSLT
The plus(x, y) -> h(x, label("+"), y); Xpp rule specifies that the left and right subtrees of each plus operator will be horizontally aligned and separated by the + sign. The h and label formatting functions are defined in all the available output formats. Xpp can be extended by adding new formatting functions defined for every available output format.
**Mapping between logical and graphical views**
For BML output, every transformation rule specifies how to build a hierarchy of graphical components. Some of these components are associated with nodes of the tree and are marked so. Others are only syntactic sugar and are just ordinary graphical objects (not marked). This marking technique is a convenient way to be able to match any graphical object with its corresponding node in the document tree. When a part of the document tree is modified, an update message is sent to the views of that document. The update message contains the path of the modified subtree and the new subtree. Transformation rules are applied to that new subtree to create a local hierarchy of graphical components: a graphical subtree. The path contained in the update message is interpreted thanks to the marked components and the obsolete graphical subtree is found. It is then replaced by the new one to reflect the document tree modification.
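The marking technique boils down to an index from tree paths to graphical components. The GraphicalView type below is an invented illustration of how the path carried by an update message locates and replaces the obsolete graphical subtree; real SmartTools views hold actual widget hierarchies, not strings.

```java
import java.util.HashMap;
import java.util.Map;

// Invented sketch: marked graphical components indexed by their path
// from the root of the document tree (e.g. "0/2/1").
class GraphicalView {
    private final Map<String, String> marked = new HashMap<>();

    void mark(String path, String component) { marked.put(path, component); }

    // Apply an update message: replace the subtree rooted at `path`.
    void update(String path, String newComponent) {
        // Drop the obsolete subtree (the marked node and everything below it).
        marked.keySet().removeIf(p -> p.equals(path) || p.startsWith(path + "/"));
        marked.put(path, newComponent);
    }

    String componentAt(String path) { return marked.get(path); }
}
```

Only the graphical subtree under the modified path is rebuilt; the rest of the view is untouched, which is what makes the update incremental.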
**The Base module**
Definitions of style (fonts, colors, etc.) are stored in separate XML resource files that are managed by the Base module. When a view (or any other module) needs style information, the Base module uses visitors to find appropriate information in the resources (represented as ASTs). There are three successive search levels: first on a general resource tree, then on the current language-specific resource tree, and finally on the active view-specific resource tree. At every step, the result is overloaded by the newly found information.
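The three-level search with overloading can be sketched as follows. StyleBase and its string-keyed resources are invented for illustration; the real Base module walks resource ASTs with visitors rather than maps.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Invented sketch of the three-level style lookup: general resources,
// then language-specific, then view-specific; each later level overrides
// what the earlier levels defined.
class StyleBase {
    private final List<Map<String, String>> levels = new ArrayList<>();

    StyleBase(Map<String, String> general,
              Map<String, String> language,
              Map<String, String> view) {
        levels.add(general);
        levels.add(language);
        levels.add(view);
    }

    // Search all three levels in order; the last level defining the key wins.
    String lookup(String key) {
        String result = null;
        for (Map<String, String> level : levels)
            if (level.containsKey(key)) result = level.get(key);
        return result;
    }
}
```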
**GUI description language**
A special XML language of SmartTools, called *lmltree*, was designed to describe the structure of the user interface. From such a description, SmartTools builds its user interface by transforming this description with the XSLT engine. The GUI is thus only a view of this description. Figure 22 shows such a description, Figure 23 the schematic graph of its AST, and Figure 17 (page 12) the resulting GUI.
```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE layout SYSTEM "lm1.dtd">
<layout>
<frame title="Smarttools V3">
<set title="InfiniteMultiplication.exp">
<split position="55" orientation="0">
<view title="Beans view" Type="BmlView" style="default.xml"/>
<split position="50" orientation="1">
<view title="Beans view3" Type="BmlView" style="xml.xml"/>
</split>
</split>
<split position="25" orientation="1">
<view title="Beans view2" Type="BmlView" style="generic.xml"/>
<split position="60" orientation="1">
<view title="Gtree" Type="GtreeView" style="*"/>
<view title="Debug" Type="DebugView" style="*"/>
</split>
</split>
</set>
</frame>
</layout>
```
Fig. 22. Lmltree specification of the GUI of Figure 17 page 12
5 Applications
SmartTools has been used to develop or quickly prototype various environments of several domain-specific languages. Its first applications were dedicated to the languages used by the system itself; it is bootstrapped. For instance, specific environments were created to edit the resources, to manipulate AST definitions or visit method profiles. Much more complex and powerful environments can be created with additional work.
An integrated environment for Java [5] was developed. Figure 24 displays a source file (.java) and its associated class file (.class) in different formats (i.e., using different pretty-printers), as shown in Figure 25. These two documents are linked, so a selection in one document is communicated to the other. The main tools of this environment are a bytecode type-checker and a bytecode simulator. All these tools use the visitor pattern technique and can be dynamically extended (e.g., with tracing or debugging features) simply by connecting aspects.
As the SmartTools architecture was designed to easily plug new components, servlets can quickly be registered on the message controller. In this way, we have experimented with a distributed version of SmartTools to edit programs on any applet-compatible Web browser thanks to a Java applet. This applet was designed to visualize components expressed in BML and to handle user interactions. It uses the HTTP protocol to communicate with SmartTools through a servlet. A generalization of this experiment (Figure 26) was also performed using Web Services (i.e., units providing data and services
---
4 Its parser was not generated as the Java language is complex
to other applications). In this manner, applications can access these Web services via standard Web protocols and data formats (e.g. XML, SOAP) without worrying about how the service is implemented.
Fig. 26. How to access SmartTools
6 Related Work
There are many equivalent or comparable systems [2,4,11,15]. The main difference is that SmartTools strongly uses XML and object-oriented technologies. In this way, our system is open and can take advantage of any further development made around Java and XML technologies. It harmoniously integrates different tools and techniques (e.g. visitor design pattern, aspect) thanks to its modular architecture and has generic visualization tools.
Our visitor approach is strongly based on the research work in [20] and very close to other developments [9,16,18]. We essentially use a simplified version of multi-methods [7,19] instead of using accept methods. In this way, it is possible:
• to obtain much more readable visitor programs (i.e. without casts), thanks to the xprofile specifications,
• to get a simple kind of adaptive programming [12,17] dedicated to our applications, thanks to the tree traversal specification,
• to introduce aspect-oriented programming on top of the visitor design pattern.

Our approach is comparable with a more general one [1]. In SmartTools, aspects can be dynamically connected to visitors and no transformation is needed, unlike [9].
For the modular architecture, we designed a message controller similar to the Toolbus [3] but restricted to our needs. That was quite an easy and straightforward solution. We plan to study component technologies (such as EJB, CORBA, Web Services, ObjectWeb, etc.) to improve the flexibility of the next architecture version. For data integration, we use XML, and for control integration, a multicasting approach. With a minimal development effort, using existing software components (RMI API) or standard protocols (SOAP protocol), we have obtained a system where it is easy to:
- plug in new components,
- build a distributed environment in connection with a Web browser or the .NET platform,
- transform it into a distributed version using ProActive [6].
For interactive requirements, our approach is different as we use XML technologies. Moreover, we apply the same transformation model to the document as well as to the GUI; that is quite an original way of building GUIs. This approach makes it possible to export views through networks (thanks to XML serialization).
The usage of W3C specifications as a source format to generate tools is a great asset for SmartTools. Language designers and end-users can directly take advantage of the non-proprietary formats provided, and can also use other W3C technologies inside SmartTools. In a Web application context, this property is important for application interoperability.
7 Conclusions
We have presented a software generator which produces programming environments strongly based on XML and object-oriented technologies. The most important contribution of this approach is to propose, at the same time and in a uniform way, a set of advanced programming features, integrated into a modular architecture, with extensible graphical viewing engines and open to XML. We have chosen to use non-proprietary APIs in order to be open and to take advantage of future or external developments around W3C specifications. On the semantic level, we presented a dedicated aspect-oriented programming approach associated with the visitor design pattern, compliant with the DOM specifications. We expect a large set of domain-specific languages to be based
---
5 The Toolbus uses more sophisticated notions: ATerms to handle trees and a coordination language to connect components
6 The terms data integration and control integration are explained in [3]
on the W3C specifications. The users (and designers) of such languages are not supposed to be experts of language theories. Therefore, we propose a semantic framework easy to use and requiring a minimal knowledge. Domain-specific languages represent a large potential of applications in various fields and will certainly introduce new open problems.
References
1 Introduction
Despite several decades of research into the Polyhedral model, there is still no general-purpose production compiler using the Polyhedral model internally. The situation is changing with the demonstration of the scalability of polyhedral algorithms and with the widespread dissemination of multicore processors and hardware accelerators. Two proprietary polyhedral compilers are in development: the R-Stream compiler from Reservoir Labs [26], and IBM’s polyhedral extension of its XL compiler suite [35].
This paper describes the GRAPHITE compilation pass of GCC, embedding polyhedral analyses and transformations into the GNU Compiler Collection (GCC) [32]. Polyhedral information is extracted directly from the GIMPLE intermediate representation, in three-address, Static Single Assignment (SSA) form. This is a major difference with traditional source-to-source polyhedral compilers which operate on high-level abstract syntax. Operating directly on the three-address code brings in new challenges but also new opportunities: we can leverage existing analyses in the compiler and interact with a wealth of optimizations.
GRAPHITE is based on the polyhedral representation designed by Girbal et al. [13]. This rich algebraic representation enables the composition of polyhedral generalizations of classical loop transformations, decoupling them from the syntactic form of the program. Classical transformations like loop fusion or tiling can be composed in any order and generalized to imperfectly-nested loops with complex domains, without intermediate translation to a syntactic form (avoiding code size explosion). GRAPHITE also aims at providing precise performance models and profitability prediction heuristics. Its applications include automatic parallelization and vectorization, offloading of computational kernels onto hardware accelerators, memory hierarchy usage optimizations, cost modelling and static code analysis (e.g., static debugging of parallel programs).
The paper is structured as follows. Section 2 discusses related work. Section 3 describes the design of GRAPHITE. Section 4 explores the current optimizations implemented in GRAPHITE. Representation of pointer accesses is an original issue for polyhedral compilation and is presented in Section 5, before the conclusion in Section 6.
2 Related work
There have been many efforts in designing an advanced loop nest transformation infrastructure. Most loop restructuring compilers introduced syntax-based models and intermediate representations. ParaScope [10] and Polaris [6] are dependence based, source-to-source parallelizers for Fortran. KAP [19] is closely related to these academic tools.
SUIF [16] is a platform for implementing advanced compiler prototypes. PIPS [17] is one of the most complete loop restructuring compilers, implementing polyhedral analyses and transformations (including affine scheduling) and interprocedural analyses (array regions, aliasing). Both of them use a syntax tree extended with polyhedral annotations, but not a unified polyhedral representation.
The MARS compiler [28] unifies classical dependence-based loop transformations with data storage optimizations. However, the MARS intermediate representation only captures part of the loop information (domains and access functions): it lacks the characterization of iteration orderings through multidimensional affine schedules.
The first thorough application of the polyhedral representation was the Petit tool [20], based on the Omega library [23]. It provides space-time mappings for iteration reordering, and it shares our emphasis on per-statement transformations, but it is intended as a research tool for small kernels only. We also use a code generation technique that is again significantly more robust than the code generation in Omega [4].
Semi-automatic polyhedral frameworks have been designed as building blocks for compiler construction or (auto-tuned) library generation systems [21, 9, 39, 8, 36]. They do not define automatic methods or integrate a model-based heuristic to construct profitable optimization strategies.
The GRAPHITE project was first announced by Pop et al. in 2006 [32], but real development work started only one year later: the number of changes committed to the GRAPHITE branch is presented in Figure 2. The design of GRAPHITE is largely borrowed from the WRap-IT polyhedral interface to Open64 and its URUK loop nest optimizer [13]. The CHiLL project from Chen et al. revisited the URUK approach, focusing on source-to-source transformation scripting [8, 36]. Unlike URUK and CHiLL, GRAPHITE aims at complete automation, possibly resorting to iterative search or statistical modeling of the profitability of program transformations. Besides, unexpected
design and implementation issues have arisen, partly due to the design of GCC itself, but mostly due to the integration of the polyhedral representation in a general-purpose compilation flow, such as pointers, profile data, debugging information, resource usage (compilation time), pass ordering, interaction among passes, etc.
3 Design
The polyhedral analysis and transformation framework called GRAPHITE is implemented as a pass of the GNU Compiler Collection. The main tasks of this pass are to extract the polyhedral model representation out of the GCC three-address GIMPLE representation, to perform the various optimizations and analyses on this representation, and to regenerate the GIMPLE three-address code corresponding to the transformations performed on the polyhedral model. This three-stage process is the classical flow of source-to-source polyhedral compilers [13, 7]. Because the starting point of the GRAPHITE pass is the low-level three-address GIMPLE code instead of the high-level syntactical source code, some information is lost: the loop structure, loop induction variables, loop bounds, conditionals, data accesses and reductions. All of this information has to be reconstructed in order to build the polyhedral model representation of the relevant code fragment.
Figure 1 shows the stages inside the GRAPHITE pass: (1) the Static Control Parts (SCoPs) are outlined from the control flow graph, (2) the polyhedral representation is constructed for each SCoP (GPOLY construction), (3) data dependence analysis and transformations are performed (possibly multiple times), and (4) the GIMPLE code corresponding to the transformed polyhedral model is regenerated (GLOOG). The details of each stage are given in the following subsections.
Fig. 5. GIMPLE code with CFG.
Fig. 6. Single-element arrays inserted to handle scalar dependences and reductions
Listing 1.1. Matvect
```c
for (i = 0; i < N; i++) {
  b[i] = 0;
  for (j = 0; j < N; j++)
    b[i] += A[i][j] * x[j];
}
```
Fig. 7. LST tree
β_{bb_3} = \{(i) \mid 0 \leq i \leq N - 1\}
β_{bb_4} = \{(i, j) \mid 0 \leq i \leq N - 1 \land 0 \leq j \leq N - 1\}
β_{bb_5} = \{(i) \mid 0 \leq i \leq N - 1\}
F_{dr_1} = \{(i, a, s_1) \mid a = 0 \land s_1 = i \land 0 \leq s_1 \leq N - 1\}
F_{dr_2} = \{(i, j, a, s_1) \mid a = 1 \land s_1 = j \land 0 \leq s_1 \leq N - 1\}
F_{dr_3} = \{(i, j, a, s_1, s_2) \mid a = 2 \land s_1 = i \land s_2 = j \land 0 \leq s_1, s_2 \leq N - 1\}
F_{dr_4} = \{(i, a, s_1) \mid a = 0 \land s_1 = i \land 0 \leq s_1 \leq N - 1\}
θ_{bb_3} = \{(i, t_1, t_2, t_3) \mid t_1 = 0 \land t_2 = i \land t_3 = 0\}
θ_{bb_4} = \{(i, j, t_1, t_2, t_3, t_4, t_5) \mid t_1 = 0 \land t_2 = i \land t_3 = 1 \land t_4 = j \land t_5 = 0\}
θ_{bb_5} = \{(i, t_1, t_2, t_3) \mid t_1 = 0 \land t_2 = i \land t_3 = 2\}
Fig. 8. Components of the polyhedral representation of the GIMPLE code
3.1 SSA based SCoP outlining
The scope of the polyhedral program analysis and manipulation is a sequence of loop nests with constant strides and affine bounds. It includes non-perfectly nested loops and conditionals with boolean expressions of affine inequalities.
The maximal Single-Entry Single-Exit (SESE) region of the Control Flow Graph (CFG) that satisfies those constraints is called a Static Control Part (SCoP) [13, 7]. GIMPLE statements belonging to a SCoP must not contain calls to functions with side effects (\texttt{pure} and \texttt{const} function calls are allowed), and the only memory references allowed are accesses through arrays with affine subscript functions.
Since the \textsc{graphite} pass is scheduled at a stage where the three-address code is in Static Single-Assignment (SSA) form, all SSA-based analyses are available for use in \textsc{graphite}. This is crucial: SCoP outlining relies on the \textit{scalar evolution} analysis framework of GCC [33]. Scalar evolution relies on the SSA form to compute closed-form expressions for induction variables. These closed forms are represented by structures called \textit{CHains of RECurrences} (CHREC).
Chains of recurrences may represent induction variables (loop induction variables, array access subscript functions) that are affine or non-affine. For example, the scalar evolution of the variable j_22 in basic block 4 (Figure 5) is expressed as \(\{0, +, 1\}_2\), meaning that the starting value of the induction variable is 0 and it is incremented by 1 in each iteration of loop number 2 (the loop corresponding to basic blocks 5 and 6).
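As an illustration (ours, not part of GCC's scalar-evolution API), an affine CHREC \(\{base, +, step\}\) has the closed form \(base + step \cdot n\) at iteration \(n\):

```c
#include <assert.h>

/* Illustrative only: evaluating an affine chain of recurrences
   {base, +, step}_loop at iteration n of its loop. The function name
   is ours, not GCC's. */
long chrec_affine_eval(long base, long step, long n)
{
    /* {base, +, step}: value(0) = base, value(n+1) = value(n) + step,
       hence the closed form base + step * n. */
    return base + step * n;
}
```

For j_22 above, \(\{0, +, 1\}_2\) evaluates to 5 at the fifth iteration of loop 2.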
SCoP outlining proceeds as follows: first, a new SCoP region is opened, and then the basic blocks of the CFG are scanned in dominator order. If a basic block contains a statement that is not representable in the polyhedral model, the whole basic block is deemed difficult, so the current SCoP region is closed and a new SCoP is opened at the basic block dominated by the difficult one. Figure 3 shows one SCoP containing all the basic blocks, whereas Figure 4 shows how a difficult statement causes multiple SCoPs to be formed.
There exist limitations to the SCoP detection algorithm currently implemented in \textsc{graphite}: for example, determining whether the scalar evolution of a variable can be handled in the polyhedral model, or whether the variable should be considered a parameter of the SCoP. We think that a detection of SCoPs based on the structured CFG with SESE regions [18] would be more appropriate. A structural SCoP detection traverses the region tree starting from the outermost SESE region and tries to prove that all the statements in that maximal region can be handled in the polyhedral representation. When a difficult statement is detected, the statement is analyzed in all the regions containing it, from the outermost region to the innermost one, until either the statement is simple enough in a smaller region, or the region is the statement itself, in which case the statement cannot be handled at all. We are considering integrating a structural SCoP detection algorithm into future versions of GCC.
### 3.2 Construction of the polyhedral representation
Once the SCoPs are outlined, the polyhedral information is built for each basic block contained in a SCoP. The polyhedral representation consists essentially of three components: iteration domains, schedules, and data accesses.
The polyhedral information attached to each basic block in a SCoP is internally called \texttt{GPOLY}. All the components of the polyhedral model are represented as a system of affine equalities or inequalities, and for that purpose a \textit{polyhedral library} is used. Currently, the Parma Polyhedra Library (PPL) \[3\] is used, but the representation is designed to accommodate other similar libraries.
Once again, the \textit{scalar evolution} analysis framework is used to deduce the affine form of the loop bounds and global parameters (to build the iteration domains) and of the memory addressing expressions (to build the data accesses). Initial scheduling functions for each basic block are deduced from the Loop Statement Tree (LST), which shows the relative ordering of the basic blocks and the initial nesting structure of the loops. An example of an LST is given in Figure 7. More details on all the components of the polyhedral model are given in the following subsection.
Contrary to source-to-source polyhedral compilers [17, 31, 25], we have chosen to represent the schedules and the domains on a per Basic Block (BB) granularity instead of on a per statement granularity. This choice is somewhat rigid, since it prevents the independent scheduling of GIMPLE statements belonging to the same basic block. On the other hand, constructing the polyhedral representation per basic block might provide greater scalability. SCoP control and data flow are represented with three components of the polyhedral model [13, 7, 34]:
**Iteration domains** capture the dynamic instances of all basic blocks — all possible values of surrounding loop induction variables — through a set of affine inequalities. Each dynamic instance of a basic block \( S \) is denoted by a pair \((S, i)\) where \( i \) is the iteration vector containing values for the loop induction variables of the surrounding loops, from outermost to innermost. The dimension of iteration vector \( i \) is \( d_S \). If the basic block \( S \) belongs to a SCoP then the set of all iteration vectors \( i \) relevant for \( S \) can be represented by a polytope:
\[
D_S = \{ i | D_S \times (i, g, 1)^T \geq 0 \}
\]
This is called the iteration domain of \( S \), where \( g \) is the vector of global parameters whose dimension is \( d_g \). Global parameters are invariants inside the SCoP, but their values are not known at compile time (e.g., parameters representing loop bounds).
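To make the inequality-system view concrete, here is a minimal sketch (our own illustration, not GRAPHITE code) of testing whether a point lies in an iteration domain given as rows of a constraint matrix:

```c
#include <assert.h>

/* Illustrative sketch (not GRAPHITE code): an iteration domain given as
   rows of a constraint matrix D, each row r encoding the inequality
   dot(r, (i, g, 1)) >= 0. Columns hold the iterator dimensions, then
   the global parameters, then the constant term. */
int in_domain(const long *D, int rows, int cols, const long *point)
{
    for (int r = 0; r < rows; r++) {
        long s = 0;
        for (int c = 0; c < cols; c++)
            s += D[r * cols + c] * point[c];
        if (s < 0)
            return 0; /* one inequality violated: outside the polytope */
    }
    return 1; /* all inequalities hold: inside the polytope */
}
```

For the inner basic block of the matvect example, the rows encode \(i \geq 0\), \(N - 1 - i \geq 0\), \(j \geq 0\) and \(N - 1 - j \geq 0\) over the vector \((i, j, N, 1)\).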
**Data references** capture the memory locations of array data elements on which GIMPLE statements operate. In each SCoP, by definition, the memory accesses are performed through array data references. A scalar variable can be seen as a zero-dimensional array (an array with only one dimension and only one element \( A[0] \)). Each array data reference inside a basic block is wrapped inside a poly_dr structure which contains the data reference polyhedron. The data reference polyhedron \( F \) encodes the access relation mapping iteration vectors in \( D_S \) to the array subscripts represented by the vector \( s: F = \{(i, a, s) | F \times (i, a, s, g, 1)^T \geq 0\} \). The alias set number \( a \) captures points-to information (pointer aliasing); it makes it possible to represent accesses through arbitrary pointers and will be defined in Section 5.
In contrast to the classical polyhedral model representation [11, 22], we have chosen to represent data references as relations. This means that there is no one-to-one correspondence between iteration vectors and subscripts, which enables us to represent memory regions, e.g., when the data reference information is incomplete (coming from interprocedural analysis, for example). Nevertheless, the correspondence between iteration vectors and data reference subscripts is very often a functional affine mapping \( s = f(i, g) \).
In previous literature, the link between dependence analysis and the analyses preceding it, such as alias analysis, has not been explored. This leads to inefficiency and imprecision in the representation, which are further exacerbated by software engineering constraints like modularity and portability. Section 5 discusses the problem and its algorithmic characterization as a hard combinatorial problem.
**Scheduling functions** are also called scattering functions inside GRAPHITE following CLooG’s terminology. While iteration domains define the set of all dynamic instances for each basic block, they do not describe the execution order of those instances. In order to define the execution order we need to give to each dynamic instance the
execution time (date) [11, 22]. This is done in GRAPHITE by constructing the *scattering polyhedron* representing the relation between iteration vectors and time stamp vector $t$: $\theta = \{(t, i) | \Theta \times (t, i, g, 1)^T \geq 0\}$.
Dynamic instances are executed according to the lexicographical ordering of timestamp vectors. By changing the scattering function, we can change the execution order of dynamic instances, thus performing powerful *loop transformations*. More details on the transformations are given in Subsection 3.5.
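The lexicographic comparison of timestamp vectors can be sketched as follows (a hand-written illustration, not GRAPHITE's implementation):

```c
#include <assert.h>

/* Hand-written illustration of the execution order: instance 1 runs
   before instance 2 iff its timestamp vector is lexicographically
   smaller. */
int lex_before(const long *t1, const long *t2, int dim)
{
    for (int d = 0; d < dim; d++) {
        if (t1[d] < t2[d]) return 1;
        if (t1[d] > t2[d]) return 0;
    }
    return 0; /* equal timestamps: not strictly before */
}
```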
Given the example GIMPLE code in Figure 5, the components of the polyhedral model representation are given in Figure 8.
### 3.3 Dependence analysis
In order to represent the semantics of the original program in the polyhedral model, the dependences between dynamic instances of statements need to be represented. These dependences are necessary to guarantee the correctness of loop transformations.
We are considering *data dependences* coming from the reads and writes of array elements. By definition [38], there is a data dependence from the dynamic instance of a basic block $(S_i, i_{S_i})$ to the dynamic instance of basic block $(S_j, i_{S_j})$ if both iteration vectors belong to their respective iteration domains (the execution is feasible), both instances refer to the same memory location, at least one of the data references is a write, and the instance $(S_i, i_{S_i})$ is executed before $(S_j, i_{S_j})$.
The *Polyhedral Dependence Analysis* (PDA) implemented in GRAPHITE is an instance-wise dependence analysis, meaning that the dependences are represented as polyhedra encoding the dependence relations between basic block instances. Projected onto the Cartesian product of the two iteration domains [38, 7, 34], the polyhedron encodes the iterations of the source of the dependence together with the iterations of its sink.
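As an illustration of the instance-wise view, the following brute-force sketch (ours; the real analysis solves this with polyhedral operations rather than enumeration) decides whether two one-dimensional affine accesses over a bounded domain carry a dependence:

```c
#include <assert.h>

/* Brute-force stand-in (ours) for the polyhedral dependence test: two
   1-D affine accesses a1*i + c1 and a2*j + c2, each over iterations
   0..n-1 of its loop, carry a dependence iff some pair of iterations
   touches the same array element and at least one access is a write. */
int has_dependence(long a1, long c1, int write1,
                   long a2, long c2, int write2, long n)
{
    if (!write1 && !write2)
        return 0; /* read after read is not a dependence */
    for (long i = 0; i < n; i++)
        for (long j = 0; j < n; j++)
            if (a1 * i + c1 == a2 * j + c2)
                return 1;
    return 0;
}
```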
### 3.4 Handling scalar dependences
While the classical dependence analysis in source-to-source polyhedral compilers considers only the data dependences between arrays (treating scalars as zero-dimensional arrays), this approach is not the most appropriate in the context of three-address code in SSA form. If scalar dependences are ignored, not all semantic constraints of the program are captured and the transformed code could be illegal. If, instead, all scalars are converted to zero-dimensional arrays, compilation time increases greatly (the polyhedral dependence check is algorithmically costly) and the generated code is inefficient.
The approach taken in the GRAPHITE framework is to classify scalar dependences into the following categories:
**Intra basic block dependences** occur between scalars inside a basic block. Those dependences are not considered by PDA in GRAPHITE: since the statements inside a basic block cannot be rescheduled, scalar dependences between statements of the same basic block are not affected by polyhedral transformations. Those dependences are captured by the use-def chains of the SSA representation.
**Cross basic block dependences** occur between scalars belonging to two different basic blocks. Those scalars are rewritten into zero-dimensional (single-element) arrays, such that PDA considers them as the regular array accesses. An example is given in Figure 6, where new zero-dimensional arrays (called *Cross_BB_sd*) are introduced.
**Reduction dependences** occur in data-flow cycles that contain associative and commutative operations, such as an accumulator variable summing the values of an array. For regular reductions, new zero-dimensional arrays are introduced (as seen in Figure 6, where Gen_Red and Close_Phi arrays are introduced). If the reduction operator can be proved commutative and associative, the dependences are marked as belonging to such a reduction. This marking enables optimizations, since reduction operations can then be rescheduled, disregarding the corresponding data dependences.
3.5 Transformations
According to the compositional approach of polyhedral transformations [13], the composition of multiple loop transformations in the polyhedral model can be expressed as a single scheduling transformation. By modifying the scheduling relations $\theta$ for each basic block, and regenerating the GIMPLE code according to those new schedules, we are able to perform arbitrary rescheduling of the basic blocks inside a SCoP.
In order to preserve the legality of the transformations, the legality check is performed for each data dependence relation.
Given the original data dependence relation $P(S_i, R_k) \rightarrow (S_j, R_l)$ representing the pairs of iterations which need to be executed in a specific order, another polyhedron $P'(S_i, R_k) \rightarrow (S_j, R_l)$ is computed, giving those pairs of iterations that violate the original dependence (they are executed in reversed order according to the new schedule). If the intersection of the two polyhedra is not empty, then there exists at least one pair of iterations executed in the wrong order, rendering the transformation illegal. The whole process is called Violated Dependence Analysis [38].
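A one-dimensional sketch of this legality check (our simplification of Violated Dependence Analysis; schedules here are affine functions of a single iterator, and violating pairs are found by enumeration rather than polyhedral intersection):

```c
#include <assert.h>

/* One-dimensional simplification (ours) of the legality check: a new
   schedule t(i) = a*i + b is legal for a set of dependence pairs
   (source iteration, sink iteration) iff no sink would be scheduled at
   or before its source; `<=` conservatively rejects ties as well. */
int schedule_is_legal(const long src[], const long snk[], int npairs,
                      long a, long b)
{
    for (int k = 0; k < npairs; k++)
        if (a * snk[k] + b <= a * src[k] + b)
            return 0; /* violated dependence: reversed (or tied) order */
    return 1;
}
```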
The task of GRAPHITE is to look for transformations that are beneficial with respect to various optimization criteria while remaining legal. A simple search heuristic looks for good transformations and rejects those that are illegal; this is necessarily an iterative process. If no transformation is legal, none is applied. GRAPHITE currently implements loop interchange, loop strip-mining, loop distribution and loop blocking.
3.6 Code generation
In source-to-source polyhedral compilers, the code generation pass is the last one, generating the new loop structures to scan statement instances in the order defined by the modified schedule.
In GRAPHITE, the final result of the pass is not syntactical source code: GRAPHITE has to regenerate GIMPLE code. Furthermore, the generated GIMPLE code has to be reinserted into the CFG, respecting the SSA form invariants, and handed to the passes that follow GRAPHITE.
Multiple loop generation tools exist that operate on the polyhedral model. The most mature one is CLooG (the Chunky Loop Generator) [4]. CLooG is used in GRAPHITE as the major component of code generation. Since CLooG is meant for generating syntactic code (mainly C code), it cannot be used directly: CLooG generates an internal representation called CLAST, a simple abstract syntax tree containing only loops, conditions, and statements. In our case, statements are replaced with basic blocks.
CLooG is fed by the polyhedral representation (GPOLY) and is asked to generate a CLAST. The nodes of the abstract-syntax tree are pointers to original basic blocks. Depending on the loop transformations, the basic blocks might be rescheduled, moved
to other loops, or even replicated (when performing a transformation). The final effect is represented in the CLAST. The CLAST tree is traversed and the basic blocks are put into their new positions in the GIMPLE CFG, loop structures are regenerated and some basic blocks are replicated.
Even in the case of the identity transformation (no schedule modification), the loops newly generated according to the CLAST tree have new induction variables. All the basic blocks belonging to a SCoP have to be scanned, and the old induction variables replaced with the new ones.
3.7 Algorithm choices and compilation speed
Polyhedral optimizers have challenges with scalability, and GRAPHITE is no exception. While developing GRAPHITE, we have encountered interesting issues that affect compilation speed, and we are exploring algorithmic choices to improve performance.
One example is loop unrolling. C++ templates are a powerful feature used in many high-performance codes; template meta-programming is combined with inlining to produce specialized loops. This style creates a large abstraction penalty that GCC has chosen to address with an early inner loop unrolling pass. Applications (such as Tramp3D in the GCC test suite) show significant performance improvement through this technique. However, this also affects loop and data dependence analysis for optimizations such as auto-vectorization and GRAPHITE, whose analysis and compilation time grows with the number of variables in each SCoP. We explore ways to tune this unrolling in the presence of GRAPHITE and eventually to implement such unrolling within GRAPHITE itself.
4 Optimizations
Loops that carry no dependence may be good candidates to be parallelized, i.e., different iterations of the loop might be executed simultaneously by multiple threads [1]. In GCC, two main infrastructures are used to accomplish loop parallelization: data dependency analysis and the GNU OpenMP library. OpenMP defines language extensions to C, C++, and Fortran for implementing multithreaded shared memory applications [29]. Automatic generation of such extensions by the compiler relieves programmers from the manual parallelization process. OpenMP support has been implemented in GCC since version 4.2 [27], and together with existing data dependence analyses, opened the door for automatic parallelization in GCC.
Automatic parallelization was first implemented in version 4.3 as a technology preview. It is able to detect loops carrying no dependences and to generate parallel code by creating and inserting the necessary OpenMP structures and support. It is triggered by \texttt{-ftree-parallelize-loops=x}, where \texttt{x} defines the number of threads to create.
**Generating parallel code.** Once the GCC auto-parallelizer decides to parallelize a loop, it generates the parallel code using OpenMP structures to define the parallel section and relevant attributes, like the scheduling method, shared vs. private variables, atomic operations, etc.
In Figure 9, we see an example of a sequential loop and the parallel code generated for it (assuming the number of threads requested by the user is 4). `.paral_data` is a structure gathering all the shared data that has to be provided to each thread. The loop is outlined to a separate function, `parloop_loopfn()`, which is run individually by each thread, supplied with the shared data.
The GNU OpenMP library provides two builtins which define the parallel section: `GOMP_parallel_start()` creates the threads, and `GOMP_parallel_end()` is a barrier where all the threads are joined. After `GOMP_parallel_start()` is executed, 4 threads are created. Each thread executes the outlined function, iterating over a different (and exclusive) interval of iterations, represented by `start` and `end` in the example. After `GOMP_parallel_end()` is executed, the threads are joined back into one master thread.
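The per-thread iteration intervals can be sketched as follows (a block-wise static split; this is our illustration, not libgomp's actual scheduling code):

```c
#include <assert.h>

/* Sketch (not libgomp's actual logic) of splitting iterations 0..n-1
   into exclusive [start, end) intervals, one per thread, using a
   block-wise static schedule. */
void chunk_bounds(long n, int nthreads, int tid, long *start, long *end)
{
    long chunk = (n + nthreads - 1) / nthreads; /* ceiling division */
    *start = (long)tid * chunk;
    *end = *start + chunk;
    if (*end > n) *end = n;
    if (*start > n) *start = n; /* surplus threads get an empty range */
}
```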
```
parloop {
.paral_data.x = &x;
__builtin_GOMP_parallel_start (parloop_loopfn, &.paral_data, 4);
parloop_loopfn (&.paral_data);
__builtin_GOMP_parallel_end();
}
```
```
parloop_loopfn (.paral_data) {
for (i = start; i < end; i++)
(*.paral_data->x)[i] = i + 3;
}
```
**Fig. 9.** (a) sequential loop (b) parallel code generated
**Integration of the parallelizer with GRAPHITE.** The initial analysis used by the parallelizer was based on the Lambda framework [24]. It has since been replaced with the GRAPHITE-based dependence analysis.
Integrating the parallelizer with GRAPHITE is profitable for a number of reasons:
- GRAPHITE dependence analysis is more accurate than Lambda, and hence can detect more parallel loops [38].
- The ability of GRAPHITE to perform long and complex compositions of program transformations makes it possible to extract more parallelism [13] and to optimize for parallelism and locality simultaneously [7].
- Since GRAPHITE is able to represent sequences of loop transformations as a single scheduling transformation, it seems natural to incorporate a cost model into it to control the transformation sequence. Parallelization is a key transformation whose cost and benefit should be applied to such a model, in the hope of deriving the
most profitable combination of loop transformations. We worked on such a cost model in the special case of automatic vectorization [37]; its extension to more general parallelization and to the management of temporal locality is in progress.
Figure 10 shows a simple example demonstrating the interaction of loop parallelization with another transformation, loop interchange. The original loop is shown in Figure 10(a). The outer loop carries a dependence and therefore cannot be parallelized. Parallelizing the inner loop is possible, but results in executing a synchronization barrier at the end of each outer-loop iteration, i.e., executing a synchronization barrier \( i \) times. If, however, we interchange the loops, as shown in (b), we can parallelize the outer loop, resulting in the use of just one barrier.
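A sketch of the two versions (our reconstruction of the scenario described above, not the exact code of Figure 10):

```c
#include <assert.h>

/* Our reconstruction of the Figure 10 scenario (not its exact code):
   the i-loop carries the dependence A[i][j] <- A[i-1][j], while the
   j-loop carries none. After interchange, the dependence-free j-loop is
   outermost and can be parallelized with a single barrier. */
void original(int n, int m, int A[n][m])
{
    for (int i = 1; i < n; i++)     /* carries the dependence */
        for (int j = 0; j < m; j++) /* parallel, but innermost */
            A[i][j] = A[i - 1][j] + 1;
}

void interchanged(int n, int m, int A[n][m])
{
    for (int j = 0; j < m; j++)     /* parallel, now outermost */
        for (int i = 1; i < n; i++)
            A[i][j] = A[i - 1][j] + 1;
}
```

Both versions compute the same result; the interchange only changes which loop can be distributed over threads.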
Automatic parallelization was integrated into GRAPHITE as part of the upcoming GCC 4.5. In addition to \texttt{-ftree-parallelize-loops=x}, \texttt{-floop-parallelize-all} must be specified to enable it as a GRAPHITE-based transformation.
5 Alias Information and Polyhedra
Alias analysis is an intrinsic module of any compiler, as it facilitates any other optimization that involves variable disambiguation, such as scheduling or identifying invariants, redundant subexpressions, etc. For scalability reasons, most compilers use fast but rather imprecise analyses like Andersen's algorithm [2], a context-insensitive, flow-insensitive, subset-based may-alias analysis. GCC relies on an extension of this algorithm that is also field-sensitive [5, 30].
A data reference is either a scalar variable, an array reference, an offset of an array by a compile-time constant, an offset of an array by an index, or a pointer variable. The difference between the latter four types in C is that the first three resolve to a constant pointer (like \texttt{const int *}) referring to a stack location, while the last one can only be resolved to an ordinary pointer (like \texttt{int *}) referring to a heap location.
**Example.** In the following code excerpt, it can be seen that \( a \) and \( p \) may alias each other, and so do \( p \) and \( b \), but \( a \) and \( b \) do not.
```c
int a[10], b[10];
void foo (int *p);
```
Most alias analysis algorithms return a points-to relation, where data references are mapped to abstract stack or heap locations called alias sets. We will also refer to this relation as the forward mapping. On the above example it is: \( a \rightarrow \{A_1\}, p \rightarrow \{A_1, A_2\}, b \rightarrow \{A_2\}. \)
In GCC (since version 4.4), the result of alias analysis is encoded in an alias oracle that answers queries about the presence or absence of a may-alias relation between pairs of data references. Such a portable interface fits well with the various scalar analyses that use it.
The information provided by the alias oracle can be represented as an undirected graph whose vertices correspond to data references and whose edges represent the presence of an alias relationship between pairs of data references.
The aliasing relation for the previous example can be represented as the undirected graph \( G_a \) with edges \( \{a, p\} \) and \( \{p, b\} \).
It is known that polyhedral dependence analysis for a given SCoP usually performs \( \mathcal{O}(n^2) \) polyhedral operations, where \( n \) is the number of convex polyhedra representing memory references. Dependence analysis can exploit the properties of \( G_a \) to reduce the number of calls to the polyhedral library. The above examples show that dependence analysis can effectively use the information provided by alias analysis to its full potential.
To represent alias information, GRAPHITE adds an extra (first) dimension to each array reference. This additional dimension, henceforth called the alias dimension, is indexed by the alias set to which the data reference points. A data reference, however, can be a member of more than one alias set. For example, the variable \( p \) in the above example belongs to two alias sets, \( A_1 \) and \( A_2 \). Though this may be due to the imprecision of the alias analysis algorithm, it could as well be a true aliasing of the associated memory regions. Hence, GRAPHITE indexes the alias dimension by the disjunction of the alias sets to which the data reference points.
In the above example, if we let \( D^R_a \), \( D^R_p \) and \( D^R_b \) be the original polyhedral domains of the three variables \( a \), \( p \) and \( b \), then the corresponding memory references, annotated with the alias dimension, are \([A_1, D^R_a]\), \([A_1 \lor A_2, D^R_p]\), and \([A_2, D^R_b]\), respectively.
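A minimal illustration of the resulting may-alias test (our sketch; GRAPHITE encodes this as a polyhedral dimension rather than explicit set lists):

```c
#include <assert.h>

/* Minimal illustration (ours): with the alias dimension indexed by the
   disjunction of alias sets, two data references can touch the same
   memory only if their alias-set lists intersect. */
int may_alias(const int *sets1, int n1, const int *sets2, int n2)
{
    for (int i = 0; i < n1; i++)
        for (int j = 0; j < n2; j++)
            if (sets1[i] == sets2[j])
                return 1;
    return 0;
}
```

For the running example, \( a \) maps to \(\{A_1\}\), \( p \) to \(\{A_1, A_2\}\) and \( b \) to \(\{A_2\}\): \( a \) and \( p \) may alias, \( a \) and \( b \) do not.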
### Definition: Minimum Edge Clique Cover
For an undirected graph \( G = (V, E) \), a Minimum Edge Clique Cover is a collection \( A_1, A_2, \ldots, A_k \) of subsets of \( V \), each inducing a complete subgraph of \( G \), such that for each edge \((u, v) \in E\) there is some \( A_i \) containing both \( u \) and \( v \). This problem is also called Edge Clique Cover (ECC). A related problem is the Vertex Clique Cover (VCC), which covers the vertices, rather than the edges, with cliques.
It is easy to see that if there is a clique in \( G_a \), then polyhedral dependence analysis has to test for dependences between all pairs of variables participating in the clique. If, on the other hand, there is no edge between a pair of variables, no polyhedral operations are needed. This information is equivalent to the cliques of \( G_a \). In the above example, there are two 2-cliques: \( \{a, p\} \) and \( \{p, b\} \). Polyhedral dependence analysis has to test for dependence within each of these pairs, but not between \( a \) and \( b \).
Maximizing the clique size in \( G_a \) is clearly helpful. But it is the edges of \( G_a \) that correspond to possible intersections of memory regions, i.e., to aliasing of data references. Hence, the problem of minimizing the number of representative elements in the alias dimension is an edge clique cover problem, i.e., ECC (rather than vertex clique cover or maximum clique). Further, since each edge is covered by a representative element, together with all the edges that could alias with it, dependence analysis can use the alias-set representatives to drive its algorithm.
### 5.1 ECC: problem and solution
The ECC problem is NP-hard (page 194 in [12]). It is different from the more widely studied VCC problem, which is closely related to the graph coloring problem. The VCC solution for a graph \( G \), though an NP-hard problem itself, can be obtained directly by coloring the complement graph \( G' \).
No fast, close-to-optimal heuristic is known for ECC; it may be very hard to solve even approximately. On the other hand, the other problems mentioned above (VCC and graph coloring) have linear-time exact solutions for special classes of graphs [14].
The algorithms for ECC either solve the problem exactly by brute-force search, or use a heuristic developed for VCC or graph coloring. Gramm et al.'s paper [15], the state of the art for this problem, suggests new algorithms for both methods. It improves the running time of a previously known heuristic from $O(|V||E|^2)$ to a more acceptable $O(|V||E|)$. The major contribution of the paper, however, is a set of data-reduction rules that shrink the input problem by preprocessing: the graph resulting from iteratively applying the rules is usually smaller, and can then be subjected to either of the above methods (exact solution or heuristic) for a faster solution.
5.2 Empirical analysis of alias graphs
We have performed an empirical analysis of the graphs returned by the alias analysis currently in GCC.
Of the 4481 graphs from the SPEC CPU 2006 benchmarks, 4367 are trivial, where trivial is defined as $|V| < 10 \lor |E| < 5$. Only 328 graphs are non-trivial. Of the latter, only 11 graphs are interesting; in the rest, every connected component is a clique. Across all the graphs, the number of vertices participating in maximal cliques varies in the range $1 \leq |V| \leq 90$.
From the above empirical analysis, one could conclude that the alias oracle, which marks every connected component as a clique, is generally very imprecise, and hence that advanced algorithms for solving ECC are not needed in the present context. A general counter-argument is the large size of real-world graphs, which contain a wide range of maximal cliques. Further, specific counter-examples exist in the real world. Figure 11 depicts the alias relation for a kernel in the GCC testsuite extracted from an H.264 decoder. Each ellipse in graph (i) represents a clique. A block edge between two ellipses $X$ and $Y$ represents an edge between every vertex of $X$ and every vertex of $Y$.

The optimal ECC solution is shown in graph (ii). It has 4 cliques, each the union of one of the smaller cliques with the biggest clique.
We now describe the polynomial algorithm that we chose to compute an ECC:

Input: an alias graph $G_a$.
Output: a mapping from vertices to alias-set representatives.

- Do a Depth First Search (DFS) on $G_a$ and separate out the connected components $A_1, A_2, \ldots, A_k$. This step takes $\max(|V(G_a)|, |E(G_a)|)$ time.
- Check whether each connected component $A_i$ is a clique. This step takes $|V(A_i)|^2$ time, where $|V(A_i)|$ is the number of nodes of the connected component $A_i$.
- If $A_i$ is small enough ($|V| < 5 \lor |E| < 10$), search for a solution using a direct search.
- Apply the simple and cheaper data-reduction rules (mainly Rule 1 and Rule 2) explained in [15], so that the problem size is reduced.
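The first two steps above can be sketched as follows (a stand-alone illustration on an adjacency matrix, not the GCC implementation):

```c
#include <assert.h>

#define MAXV 16

/* Stand-alone illustration (not the GCC implementation) of the first
   two steps: separate connected components by DFS, then check whether
   a component is a clique (every pair of its vertices adjacent). */
static void dfs(int v, int n, int adj[MAXV][MAXV], int comp[], int id)
{
    comp[v] = id;
    for (int u = 0; u < n; u++)
        if (adj[v][u] && comp[u] < 0)
            dfs(u, n, adj, comp, id);
}

int connected_components(int n, int adj[MAXV][MAXV], int comp[])
{
    int id = 0;
    for (int v = 0; v < n; v++) comp[v] = -1;
    for (int v = 0; v < n; v++)
        if (comp[v] < 0)
            dfs(v, n, adj, comp, id++);
    return id;
}

int component_is_clique(int n, int adj[MAXV][MAXV], const int comp[], int id)
{
    for (int v = 0; v < n; v++)
        for (int u = v + 1; u < n; u++)
            if (comp[v] == id && comp[u] == id && !adj[v][u])
                return 0; /* two component vertices are not adjacent */
    return 1;
}
```

On the path-shaped graph of the running example ($a$, $p$, $b$), the single connected component is not a clique, which is exactly the case where the conservative method loses precision.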
Currently, we use DFS-based numbering, testing for cliques, and a simple search. Though very conservative, this method gives the optimal solution in most of the cases for SPEC CPU 2006, though not for the H.264 example shown above: there, it returns that all edges are in the same cover. Thus, our current method does not search for cliques, which leads to a loss of precision, and needs to be improved. We are working on a heuristic approach to design a polynomial algorithm computing a suboptimal ECC without loss of points-to information.
6 Conclusion
We presented the design of the GRAPHITE pass of GCC, focusing on the challenges and novel research issues arising from this confrontation of polyhedral compilation with the real world. Our work makes the following contributions:
- We implemented the polyhedral model on a three-address, SSA-based representation, opening interesting reuse and interaction opportunities for analyses and optimizations in production compilers.
- We extended the polyhedral representation to capture alias relations among pointer-based data references, with no impact on polyhedral dependence analysis and transformation algorithms.
- We also extended this representation to capture scalar dependences and reductions.
- We set the framework for aggregating statements into “polyhedral basic blocks” or splitting those blocks into smaller components, with the ability to trade expressiveness for compilation time.
- We motivated further research on the practical interaction between polyhedral loop transformations and other optimizations, including parallelization and vectorization.
6.1 Prospective work
There are two main issues that are the focus of the prospective work on automatic parallelization:
- **Heuristics/cost model for automatic parallelization.** Currently, a very simple method is used to determine whether it is profitable to parallelize a certain loop. We need a good model to determine if we should parallelize a loop considering performance reasons.
- **Advanced automatic parallelization.** Currently we are only able to detect whether a loop is parallel or not. We would like to explicitly apply transformations to increase and expose further parallelism opportunities, and we have shown that GRAPHITE is the right setting to design such transformations.
Acknowledgments. Tobias Grosser was supported by AMD as a summer intern and by a Google Summer of Code grant. Konrad Trifunovic was supported by IBM and the HiPEAC FP7 European network as a summer intern. Li Feng was supported by a Google Summer of Code grant. GRAPHITE was partially supported by the ACOTES FP6 European project.
References
Abstract
This paper introduces AHA, an NWO-funded (€344K) research project on the amortized analysis of heap-space usage by functional and imperative programs. Amortized analysis is a promising technique that can significantly improve on simply summing worst-case bounds. The project seeks to apply this technique to obtain non-linear bounds on heap-space usage for lazy functional languages and to adapt the results to imperative languages.
1 INTRODUCTION
Estimating heap consumption is an active research area, as it becomes more and more of an issue in many applications. Examples include programming for small devices (e.g. smart cards, mobile phones, embedded systems) and distributed computing (e.g. GRID computing). The standard technique for estimating heap consumption gives unrealistically high bounds in many cases. As a consequence, in practice amounts of heap are reserved that are unnecessarily expensive and, for small devices, highly impractical. A more accurate analysis is wanted for these cases in particular and for high-integrity real-time applications in general.
A promising technique to obtain accurate bounds of resource consumption and gain is amortized analysis. The amortized analysis of a resource considers not the worst case of a single operation but the worst case of a sequence of operations. The overall amortized cost of a sequence is calculated by taking into account both the higher costs of one operation and the lower costs of another weighing them according to their distribution. In many cases amortized analysis can give much more accurate resource consumption estimates than the standard worst case analysis.
Combining amortization with type theory makes it possible to check linear heap-consumption bounds for functional programs with explicit memory deallocation. The AHA project aims to adapt this method to deal with non-linear bounds within (lazy) functional programs, as well as to transfer the results of the functional-programming community to the imperative object-oriented programming world by applying the amortized method to derive accurate bounds on the heap usage of Java programs. In this way the project enhances both fundamental theory and practical impact.
1.1 Relevance
Because memory exhaustion will invoke garbage collection, heap usage can indirectly slow down execution and hence influence time complexity. A better heap space analysis will therefore enable a more accurate estimation of time consumption. This is relevant for time-critical applications. Analyzing resource usage is also interesting for optimizations in compilers for functional languages, in particular of memory allocation and garbage collection techniques. A more accurate estimation of heap usage enables allocation of larger memory chunks beforehand instead of allocating memory cells separately when needed, leading to a better cache performance.
Resource usage is an important aspect of any safety or security policy, as exhausting available resources typically causes system failure. Indeed, it is one of the most important properties one wants to specify and verify for Java programs meant to be executed on (embedded) Java-enabled devices with limited amounts of memory, such as smart-cards implementing the Java Card platform, and MIDP mobile phones implementing the Java 2 Micro Edition (J2ME) platform.
The Java Modeling Language (JML) already provides some rudimentary possibilities for specifying the resource usage of Java programs. However, there is only syntax for specifying this, without any clear semantics, and there are no tools for actually monitoring – let alone proving – that such constraints are met.
1.2 Research questions
The AHA project investigates the possibilities for analyzing heap usage for both functional and imperative object-oriented languages, more specifically, Clean and Java. It aims to answer the following research questions:
- It is clear that the heap analysis for functional languages can be improved so that a wider class of resource-usage bounds than just linear bounds can be guaranteed. The question is how complex the type-checking and inference procedures may become. In particular, which arithmetics and constraint solvers will be needed, and for which classes of programs?
- Can heap space analysis be done for lazy functional languages?
Heap space analysis for lazy functional languages is clearly more complicated than for strict languages, because the heap space is also used for unevaluated expressions (closures). The amount of memory that is used at a certain moment depends on the evaluation order of expressions, which in its turn is influenced by the strictness analyzer in the code generating compiler.
- How successfully can one adapt the approach to object-oriented imperative languages? The aim here is to be able to prove – or, better still, derive – properties about the heap-space consumption of Java programs specified in an extension of JML (the Java Modeling Language).
2 INTRODUCTION TO AMORTIZATION
2.1 Amortization of resources in program analysis
The term “amortization” came to computer science from the financial world. There it denotes a process of ending a debt by regular payments into a special fund. In computer science amortization is used to estimate time and heap consumption by programs. “Payments” in a program are done by its operations or the data structures that participate in the computation, see [14]. These payments must cover the overall resource usage. Methods of distribution of such “payments” across operations or data structures form the subject of amortized analysis.
To begin with, consider amortized time costing. Given a sequence of operations, one often wants to know not the costs of the individual operations, but the cost of the entire sequence. One assigns to an operation an amortized cost, which can be greater or less than its actual cost. All one is interested in is that the sum of the amortized costs is large enough to cover the overall time usage. Thus, one redistributes the run time of the entire sequence over the operations. The simplest way to arrange such a redistribution is to assign to each operation the average cost $T(n)/n$, where $T(n)$ is the overall run time and $n$ is the number of operations. A rich operation is one whose amortized cost, say $T(n)/n$, exceeds its actual cost. Rich operations pay for “poor” ones.
Consider the Haskell-style version of the function multipop from [8] that, given a counter $k$ and a stack $S$, pops elements from the top of the stack until the stack is empty or the counter reaches zero:
```haskell
type Stack a = [a]

multipop :: Int -> Stack Int -> Stack Int
multipop _ []     = []
multipop 0 (x:xs) = x:xs
multipop k (_:xs) = multipop (k-1) xs
```
If the actual costs of push and pop are 1 each, then the actual cost of multipop is $\min(s, k)$, where $s$ is the size of the stack $S$. Assigning amortized costs, one may think in the following way. Each operation push has the actual cost 1, but it “takes care” of the future of the element it pushes on the stack. This element may be popped out. So push obtains the amortized cost 2, to pay for itself and for the possible call of pop. Thus, all possible calls of pop are paid for while constructing the input $S$ for multipop. After the construction of the stack $S$, the amortized cost of pop is 0, and the amortized cost of multipop, being the sum of the amortized costs of the pops, is zero as well. The amortized cost of the construction of $S$ followed by multipop is $2s$, whereas its actual cost is $s + \min(s, k)$.
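This accounting can be replayed mechanically. The sketch below defines the cost functions directly from the counts in the text (the names `actualCost` and `amortizedCost` are ours) and makes the covering property $2s \geq s + \min(s, k)$ explicit:

```haskell
-- actual cost: s pushes at cost 1 each, then min s k pops at cost 1 each
actualCost :: Int -> Int -> Int
actualCost s k = s + min s k

-- amortized cost: each push pays 2 (1 for itself, 1 credit for a later pop);
-- pops are then free, so the total is independent of k
amortizedCost :: Int -> Int -> Int
amortizedCost s _ = 2 * s
```

For every stack size and counter, the amortized total covers the actual total, which is exactly the correctness condition given below.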
The correctness of an amortized analysis for a sequence of \( n \) operations is defined by
\[
\sum_{i=1}^{j} a_i \geq \sum_{i=1}^{j} t_i,
\]
where \( j \leq n \), \( a_i \) is the amortized cost of the \( i \)th operation, and \( t_i \) is its actual cost. In this way one ensures that, at any moment of the computation, the overall amortized cost covers the overall actual cost.
### 2.2 Views to Amortization
A general understanding of amortization [16] is based on a graph presentation of programs. A program is viewed as a directed graph with states (i.e. data structures) as nodes and edges (i.e. basic operators or constructs) as transitions between them. A possible computation is a path in the graph.
Branching in the graph appears due to non-determinism or due to the fact that states may be abstract. In other words, states may represent not concrete operational states like memory layouts, but their “projections”. When concrete information is lost, the if-then-else construct is presented in the graph by branching.
In a physicist’s view of amortization one assigns to any state \( s \) a real number \( \Phi(s) \), called the potential of the state \( s \). For the time being we consider only non-negative potentials. The intuition behind the potential function is that it reflects the amount of resources (heap units, time ticks) that may be discharged during a computation starting from the state \( s \). In the physicist’s approach the amortized cost of any path between some \( s \) and \( s' \) is the actual cost plus the potential difference \( \Phi(s') - \Phi(s) \).
To introduce a banker’s view we first note the following. Each edge \( e(s_1, s_2) \) has its actual cost \( t(s_1, s_2) \) defined by the corresponding basic command or the construct. Let it have an amortized cost \( a(s_1, s_2) \). The difference \( a(s_1, s_2) - t(s_1, s_2) \) for the edge \( e(s_1, s_2) \) is called a surplus. If the difference \( a(s_1, s_2) - t(s_1, s_2) \) is positive, it is called a credit, it may be used to cover the actual costs of further computations. The actual/amortized cost of a path \( \pi \), between some \( s \) and \( s' \), is the sum of actual/amortized costs of edges. In principle, the costs of two paths \( \pi \) and \( \pi' \) between the same vertices may differ. If for any two states \( s \) and \( s' \) it holds that \( a(s, s') = t(s, s') + \Phi(s') - \Phi(s) \), then the analysis is called conservative.
It is clear that for any physicist’s view one can find a corresponding banker’s approach. The opposite transformation is more complicated. The banker’s approach is more general than the physicist’s one, because one considers particular paths rather than only their initial and end points. However, it has been shown [16] that for any banker’s amortization distribution \( a \) there is a “better” conservative distribution \( \alpha \) and a potential function \( \Phi \) for it, such that:
- \( \alpha(s, s') = t(s, s') + \Phi(s') - \Phi(s) \) (a conservative analysis),
- the new analysis has the same set of pluspoints as the old one. A pluspoint is a vertex for which the surpluses of all paths from the initial state to it are non-negative,
- \( \alpha(s_1, s_2) \leq a(s_1, s_2) \) for any edge \( e(s_1, s_2) \).
Thus, without loss of generality we restrict our attention to a conservative amortized analysis, where amortized costs depend only on the end and initial vertices, but not on a concrete path.
2.3 Amortization for Heap Consumption Gives Size of Live Data
Any data structure that exists during the computation of a function may be constructed either from the initially allocated heap units, accounted for by the initial potential function, or from reused heap cells of the initial data (for a language with destructive pattern matching).
If heap management is performed by maintaining a free list, then the heap layouts before and after the computation are presented by the schema in Figure 1. One can see maintaining a free list as an ideal garbage collector: once a location is no longer used, it is put on top of the free list, and a fresh cell is taken from the top of the free list. Thus, a potential function and the size of the input data define an upper bound on the size of the live data at any moment of the computation. In general, the physicist’s approach gives the following dependency:
\[
\text{size(input)} + \Phi_{\text{in}} = \text{size(data\_current)} + \Phi_{\text{current}} = \text{size(output)} + \Phi_{\text{out}}
\]
Below we discuss how amortization may be incorporated into a type system.
3 STATE OF THE ART: A TYPE SYSTEM FOR LINEAR BOUNDS ON THE SIZE OF LIVE DATA
One can implement a heap-aware amortized analysis via an annotated type system. In this section we consider an annotated type system that corresponds to a banker’s view. It was introduced by Hofmann and Jost [10] for linear bounds on heap consumption. Given a first-order program, this system allows one to infer an upper bound (if it exists and is linear) on the amount of freshly allocated heap units. More precisely, the size of live data during a run of the program will never exceed the size of the input plus the inferred linear function. The operations that affect heap consumption are constructors and pattern matching. The coefficients of the linear bounds appear in the form of numerical annotations (constants) on types. For instance, a program that creates a fresh copy of a list of integers
```haskell
copy :: [Int] -> [Int]
copy []     = []
copy (x:xs) = x : copy xs
```
has the annotated signature \( L(\text{Int}, 1), 0 \rightarrow L(\text{Int}, 0), 0 \). It means that each element of the input list must be supplied with 1 extra heap unit to pay for the space of its copy. The connection with the banker’s view is obvious: the annotations, attached to constructors, play the role of credits. The potential of a list of type \( L(\text{Int}, k) \) of length \( n \) is \( k \cdot n \). The heap consumption of a program \( pf \) with the signature \( L(\text{Int}, k), k_0 \rightarrow L(\text{Int}, k'), k_0' \) does not exceed \( k \cdot n + k_0 \) heap units, and at the end of the computation at least \( k' \cdot n' + k_0' \) heap units are available, where \( n \) and \( n' \) are the sizes of the input and output lists, respectively.
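The meaning of the annotation can be checked concretely: `copy` allocates exactly one cell per input element, which is exactly the potential of an input of type \( L(\text{Int}, 1) \). In the sketch below, `allocCopy` and `potential` are our own instrumentation, not part of the system in [10]:

```haskell
-- copy as in the text; each (:) on the right-hand side allocates one cell
copy :: [Int] -> [Int]
copy []     = []
copy (x:xs) = x : copy xs

-- cells allocated by copy: one per element of the input
allocCopy :: [Int] -> Int
allocCopy = length

-- potential of a list of type L(Int, k): k credits per constructor
potential :: Int -> [Int] -> Int
potential k xs = k * length xs
```

The signature \( L(\text{Int}, 1), 0 \rightarrow L(\text{Int}, 0), 0 \) then says `allocCopy xs <= potential 1 xs + 0` for every input `xs`.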
In fact, the type system above infers two (linear) potential functions for a given program: the potential of the input and the potential of the output. The potential of the input may be discharged during the run of the program, and the potential of the output may be used in further computations.
It is possible to extend this approach for non-linear bounds. The aim of the presented project is to study such extensions.
4 PROJECT PLAN
To answer the three research questions posed in Section 1.2, the project is partitioned into an initial step followed by two parallel research lines. The initial step serves as a prerequisite for the two lines and will establish the foundations of amortized analysis with non-linear bounds for strict languages. After that, a fundamental theoretical research line will extend this analysis to a lazy language. A parallel practical line will transfer the theoretical results to a more practical imperative object-oriented setting.
Ultimately, we want to implement the type systems for heap space usage to obtain an implementation that can check whether a given program, possibly with some type-annotations, meets a given bound on heap space usage or an implementation that can actually compute such a bound.
4.1 Amortized analysis with non-linear bounds
There are many interesting examples that require non-linear heap space, for instance matrix multiplication and Cartesian products. Also the generation of a sports competition programme, in which every team plays a home and an away match against every other team, needs a non-linear amount of heap space.
In all of these examples, the size of the output is also non-linear when expressed in terms of the input: the sports competition has $n \cdot n - n$ matches, where $n$ is the number of teams. Therefore, it is clear that the size relations are important when computing amortized bounds.
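The round-robin schedule can be generated directly, making the $n \cdot n - n$ count concrete. This is a small sketch with a hypothetical `matches` helper, not part of any system described here:

```haskell
-- all ordered pairs of distinct teams: one home and one away match
-- per pair, hence n*n - n matches for n teams
matches :: Eq a => [a] -> [(a, a)]
matches ts = [ (home, away) | home <- ts, away <- ts, home /= away ]
```

The output size (and hence the heap needed to hold it) is quadratic in the input size, which is why size relations must feed into the amortized analysis.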
**Methodology.** On the theoretical level (without implementing it for an actual programming language) we will tackle the derivation of size relations separately from heap-space usage, to keep both systems as simple as possible. The results from the system that derives the size relations serve as input for the amortized analysis. The amortized analysis will be an extension of the existing linear analysis [10].
### 4.2 Amortized Heap Analysis of a Lazy Language
An amortized time analysis for call-by-need languages is considered in [14]. Instead of credits it uses *debts* to cover costs of *closures* (suspensions). A closure is allowed to be forced only after its debt is “paid off” by the operations preceding the operation which forces the closure.
**Choice of Programming Language.** To consider heap usage analysis for lazy functional programming languages, we will begin with a strict version of core-Clean. We have chosen Clean since Clean’s uniqueness typing [3] makes Clean more suited as a starting point than e.g. Haskell, since with uniqueness typing reuse of nodes can be analysed in a sophisticated manner. For this strict Core-Clean language we will define an alternative operational semantics which will take heap usage into account, and then formulate a type system in which annotations in types express costs.
**Methodology.** Camelot [13] is an ML-like strict first order functional language with polymorphism and algebraic data types. To enable analysis of heap usage Camelot makes a syntactic distinction between destructive and non-destructive pattern matchings, where destructive pattern matching allows a node of heap space to be reclaimed; it is expected to be relatively easy to transfer such a distinction to a language that has uniqueness typing, as this can enforce the safe use of destructive pattern matching. Therefore, we expect that the results achieved for Camelot will be quickly transferred to the strict version of core-Clean. Then, we will make incremental changes, by changing the strict semantics into a mixed lazy/strict semantics and then investigate the effect on the operational semantics and the type system. This is not a big step in the dark since the heap-aware inference system from [10] already has some flavor of the call-by-need semantics. *Shared* usage of variables by several expressions is treated, for instance, in the MATCH-rule given below in Section 5.3 and in the SHARE-rule in [10].
---
² Following [14] we associate *call-by-value* with strict languages, *call-by-name* with lazy languages without memoization, and *call-by-need* with lazy languages with memoization.
4.3 Adaptation to Object-Orientation
Choice of Programming Language. As the object-oriented programming language to be studied we have chosen Java. We will use the Java semantics developed in the LOOP project [11], which includes an explicit formalisation of the heap. This will first require accurate accounting of heap usage in the type-theoretic memory model underlying the LOOP tool [5].
The Java Modeling Language JML, a specification language tailored to Java, already provides a syntax for specifying heap usage, but this part of JML is as yet without any clear semantics. We want to provide a rigorous semantics for these properties about heap space usage and then develop an associated programming logic for proving such properties.
Methodology. We will start to adjust the analysis of Section 5.3 by applying it to classes that admit a functional algebraic data-type (ADT) interface. These classes can be considered as defining a number of operations on an algebraic data type. One extracts basic imperative routines which contain explicit allocation/deallocation, and correspond to (co)algebraic operations, and have functional counterparts, like data constructors and pattern matching(s). A field assignment, for example, may be presented as a composition of the destructive match and a constructor. Heap-aware typing judgments must be defined for these macro-operations and the language constructs like if-branching, sequencing and while-repetition.
Next, research will be done to alleviate the restrictions that are set upon the classes in order to make the analysis applicable. For that purpose, we will investigate the possibility of introducing amortized variants of existing specific analyses (such as the non-recursive [6] and the symbolic [7] which treats aliasing).
One of the main problems for heap space analysis will be aliasing. Aliasing-aware type systems and logics presented in [1, 12] may be considered separately from the resource-aware typing system and are to be combined with it at the very last stage of the design of the proof system.
5 FIRST STEPS
5.1 Towards non-linear upper bounds on the size of live data
It is convenient to measure the potentials of data structures in terms of their sizes. For instance, for a list of length $n$ its potential may be a function of $n$, that is, $\Phi(n)$. In general one assigns a potential to an overall data structure. In other words, a potential is assigned to the abstract state that is the collection of the sizes of the structures existing in a given concrete state. Now, consider a program that creates the initial table from a list of $n$ rows and a list of $m$ columns. For its input of type $L_n(\text{String}) \times L_m(\text{String})$ the potential $\Phi(n, m)$ should be proportional to $n \cdot m$.
The banker’s view is reduced to assigning a credit to each constructor of a data structure. For instance, in [10] each constructor of a list of type $L(\alpha, k)$ has a constant credit $k$, and thus the potential of the list is $k \cdot n$, where $n$ is its length.
In general the credit of a node may be a function. It may depend on the position of the node in the list, and/or on the size of the list, as well as on the size of a “neighboring” data structure, etc. For instance, in the table-creating program the annotated type of its input may be $L_n(\text{Int}, k) \times L_m(\text{Int}, 0)$, where $k(\text{position}, n, m) = m$.
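Summing such position-dependent credits over all nodes recovers the bilinear potential $n \cdot m$ of the table-creating program. In the sketch below, `credit` and `tablePotential` are our illustrative names for the ingredients just described:

```haskell
-- credit of the node at a given position: k(position, n, m) = m
credit :: Int -> Int -> Int -> Int
credit _position _n m = m

-- potential of L_n(Int, k): the sum of the credits of the n nodes;
-- here this is n * m, a non-linear (bilinear) potential
tablePotential :: Int -> Int -> Int
tablePotential n m = sum [ credit p n m | p <- [1..n] ]
```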
In the linear heap-consumption analysis [10] these dependencies were not taken into account. This makes the analysis very simple, because it reduces to solving linear inequalities. It covers a large class of functional programs with linear heap consumption, where the coefficients of the linear functions are the credits of the constructors.
Introducing dependencies will significantly increase the complexity of type checking and inference. We study classes of programs for which type checking and inference of non-linear bounds are decidable.
5.2 Examples: going on with amortization-and-types
The linear heap-consumption analysis shows that amortization and types suit each other. In this section we consider examples one of which illustrates the advantages of their combination and the other one motivates study of annotated types for non-linear heap consumption.
5.2.1 Type systems bring modularity to amortized analysis
In the following example the naive worst-case analysis significantly overestimates the real heap consumption, and a precise analysis is rather complicated. We show that with the help of types annotated with credits one obtains a very good upper bound at a “reduced price”: types make the analysis modular and, thus, simpler and more suitable for automated checking or inference.
Consider queues (“first-in-first-out” lists) presented as pairs of lists in the usual way. A queue $q$ is represented by a pair $(xs, ys)$, such that $q$ is $xs$ ++ (reverse $ys$). The head of the list $xs$ is the head of the queue, and the head of $ys$ is the tail of the queue. For instance, the queue $[1, 2, 3, 4, 5]$ may be presented as $([1, 2], [5, 4, 3])$. One adds elements to the queue by pushing them onto the head of $ys$, see Figure 2 below. After adding 6 the resulting queue is presented by $([1, 2], [6, 5, 4])$. The function “remove from the queue” will pop 1 from $xs$. Consider the code for remove, where reverse creates a fresh copy of the reversed list:
```haskell
remove :: ([Int], [Int]) -> (Int, ([Int], [Int]))
remove ([], []) = error "empty queue"
remove ([], ys) = remove (reverse ys, [])
remove (x:xs, ys) = (x, (xs, ys))
```
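To connect this code with the pair representation described above, the following sketch adds the insertion and abstraction functions. The helpers `insertQ` and `toList` are ours, added for illustration:

```haskell
-- add to the queue: push onto the head of ys
insertQ :: Int -> ([Int], [Int]) -> ([Int], [Int])
insertQ y (xs, ys) = (xs, y : ys)

-- the abstract queue that a pair represents: xs ++ reverse ys
toList :: ([Int], [Int]) -> [Int]
toList (xs, ys) = xs ++ reverse ys
```

For example, the pair $([1, 2], [5, 4, 3])$ abstracts to the queue $[1, 2, 3, 4, 5]$, and inserting 6 yields $([1, 2], [6, 5, 4])$.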
We assume that input pairs and output triples are not boxed, that is, two input pointer values are taken from the operand stack and in the case of normal termination three values will be pushed on the operand stack. (This helps to avoid technical overhead with heap consumption for pairs and triples creation.)
Let \( n \) denote the length of `remove`'s first argument and \( m \) the length of the second. If \( n = 0 \), then `remove` consumes \( m \) heap cells; otherwise it consumes none.
Annotate the function type in the spirit of the physicist's point of view:
\[
L_n(\text{Int}) \times L_m(\text{Int}), \Phi \longrightarrow \text{Int} \times L_{p_1}(\text{Int}) \times L_{p_2}(\text{Int}), \Phi'
\]
where \( p_1 = m - 1 \) and \( p_2 = 0 \) if \( n = 0 \), and \( p_1 = n - 1 \) and \( p_2 = m \) if \( n > 0 \). \( \Phi \) denotes the potential of input data before the computations, and \( \Phi' \) denotes the potential of the data after the computation. Clearly, \( \Phi' = \Phi - m \) if \( n = 0 \), and \( \Phi' = \Phi \) if \( n > 0 \). The drawback of this presentation is that \( \Phi' \) is defined piecewise.
The typing corresponding to the banker's view is more elegant, since it avoids piecewise definitions for amortization:
\[
L_n(\text{Int}, 0) \times L_m(\text{Int}, 1), 0 \longrightarrow \text{Int} \times L_{p_1}(\text{Int}, 0) \times L_{p_2}(\text{Int}, 1), 0
\]
Indeed, if \( n = 0 \) then the potential of the second argument, \( 1 \cdot m \), is spent by `reverse`; \( p_2 = 0 \) and the potential of the second list on the r.h.s. is \( 1 \cdot p_2 = 0 \). If \( n > 0 \) then the potential of the second argument \( 1 \cdot m \) is not spent, the second argument is left intact, \( p_2 = m \), and the potential of the second list on the r.h.s. is \( 1 \cdot p_2 \). So, amortization keeps track of the resources that are left after the computation and may be used afterwards. The effect of combining amortization with types may be seen in the composition of `remove` and `copy3`:
\[
\text{Int} \times L_n(\text{Int}, 0) \times L_m(\text{Int}, 1), 0 \longrightarrow L_m(\text{Int}, 0), 0
\]
that returns a fresh copy of the third argument.
The naive worst-case analysis consists of summing the two worst-case heap-consumption estimations: \( m \) for `remove` and \( m \) for `copy3`, giving \( 2 \cdot m \) in total.
The precise worst-case analysis requires a detailed abstract analysis of the entire composition and leads to a piecewise definition of the consumption function, which is later simplified to the linear function \( p(n, m) = m \).
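The gap between the naive and the precise bound can be illustrated with a small model. The following Python sketch is hypothetical: it only encodes the per-branch heap costs of `remove` and `copy3` as stated above, not their actual definitions.

```python
def heap_cost_remove(n, m):
    """Cells consumed by `remove`, and the length p2 of its third result."""
    if n == 0:
        return m, 0   # remove consumes m cells; the third result is empty
    return 0, m       # remove consumes nothing; the third result has length m

def heap_cost_composition(n, m):
    """Exact heap cells consumed by copy3 (remove (...))."""
    cost_remove, p2 = heap_cost_remove(n, m)
    cost_copy = p2    # copy3 allocates one cons-cell per copied element
    return cost_remove + cost_copy

# In every branch the exact cost is m, while the naive summation of
# the two per-function worst cases would give 2 * m.
for n in (0, 1, 5):
    for m in (0, 3, 10):
        assert heap_cost_composition(n, m) == m
```

The point of the sketch is that whichever branch `remove` takes, exactly one of the two functions pays for \( m \) cells, so the precise bound is \( m \), not \( 2 \cdot m \).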
The type of `copy3 (remove (-))` is easily obtained by composition:
\[
\text{Int} \times L_n(\text{Int}, 0) \times L_m(\text{Int}, 1), 0 \longrightarrow L_m(\text{Int}, 0), 0.
\]
It means that the composition consumes $1 \cdot m$ heap units. Some program analysis has been done here as well, but only at the level of `remove`, to obtain its type. This is done once and for all, and the type is applicable in any other composition.
5.2.2 Nonlinear bounds
In this subsection we consider an example to illustrate which extensions we should be able to cover.
Consider a program that, given two lists of strings of lengths $n$ and $m$ respectively, creates the initial $n \times m$ table of pairs of integer numbers, filled with $(0, 0)$. This program is used to create the initial table for a tournament, like a round in a soccer championship.
We start with a comment on how the initial table is used. During a round, each team plays two games – at home and as a guest. Suppose, for instance, that “Ajax”, number 1 in the list (in alphabetical order), plays “Feyenoord”, number 3, in Amsterdam. The result is 2–1, so one places $(2, 1)$ in position $(1, 3)$ of the table. Let “Feyenoord” play “Ajax” in Rotterdam, 1–1; one places $(1, 1)$ in position $(3, 1)$. At the end of the round the table, except the diagonal, is filled with the results.
We need the auxiliary function:
\[
\begin{array}{l}
\text{init\_row} :: [\text{String}] \rightarrow [(\text{Int}, \text{Int})] \\
\text{init\_row}\ [\,] = [\,] \\
\text{init\_row}\ (h : t) = (0, 0) : \text{init\_row}\ t
\end{array}
\]
Its annotated type is $L_n(\text{String}, 2), 0 \rightarrow L_n(\text{Int} \times \text{Int}, 0), 0$. Note that here the pairs of integers are allocated in the heap, and we assume that a pair allocates one heap unit, just as a cons-cell does.
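As a sanity check, the allocations of `init_row` can be counted in a small Python model. The model is ours, not the paper's; it only encodes the stated cost assumption that each element allocates one pair plus one cons-cell.

```python
def init_row_with_cost(strings):
    """Build the row of (0, 0) pairs and count the heap cells allocated."""
    allocated = 0
    row = []
    for _ in strings:
        allocated += 1   # the fresh pair (0, 0)
        allocated += 1   # the cons-cell holding it
        row.append((0, 0))
    return row, allocated

# The cost 2 * n matches the credit 2 carried by each cell of the
# input list in the annotated type L_n(String, 2).
row, cost = init_row_with_cost(["Ajax", "AZ", "Feyenoord"])
assert cost == 2 * 3
```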
The main “working” function is:
\[
\begin{array}{l}
\text{init\_table} :: [(\text{Int}, \text{Int})] \rightarrow [\text{String}] \rightarrow [[(\text{Int}, \text{Int})]] \\
\text{init\_table}\ \text{row}\ [\,] = [\,] \\
\text{init\_table}\ \text{row}\ (h : t) = \text{copy\_row} : \text{init\_table}\ \text{row}\ t
\end{array}
\]
with type $L_n(\text{Int} \times \text{Int}, 2m) \times L_m(\text{String}, 0), 0 \rightarrow L_m(L_n(\text{Int} \times \text{Int}, 0), 0), 0$.
The function that creates the initial tournament table,
\[
\begin{array}{l}
\text{init\_tour} :: [\text{String}] \rightarrow [[(\text{Int}, \text{Int})]] \\
\text{init\_tour}\ \text{teams} = \text{init\_table}\ (\text{init\_row}\ \text{teams})\ \text{teams},
\end{array}
\]
has the annotated type $L_n(\text{String}, 2n + 2), 0 \rightarrow L_n(L_n(\text{Int} \times \text{Int}, 0), 0), 0$.
5.3 Experimental Type System
We start with a type system for a first-order call-by-value functional language over integers and polymorphic lists. So far, we consider shapely programs, that is, programs for which the sizes of the output depend polynomially on the sizes of the input lists.
5.3.1 Language and Types
The abstract syntax of the language is defined by the following grammar, where \( c \) ranges over integer constants, \( x \) and \( y \) denote zero-order program variables, and \( f \) denotes a function name:
\[
\begin{array}{ll}
\text{Basic} & b ::= c \mid \text{nil} \mid \text{cons}(x, y) \mid f(x_1, \ldots, x_n) \\
\text{Expr} & e ::= \text{letfun}\ f(x_1, \ldots, x_n) = e_1\ \text{in}\ e_2 \\
& \quad \mid\ \text{let}\ x = b\ \text{in}\ e \mid \text{if}\ x\ \text{then}\ e_1\ \text{else}\ e_2 \\
& \quad \mid\ \text{match}\ x\ \text{with}\ \text{nil} \Rightarrow e_1 \mid \text{cons}(x_{hd}, x_{tl}) \Rightarrow e_2
\end{array}
\]
We have been studying a type and effect system in which types are annotated with size expressions (lowercase indices) and credit functions.
Size expressions that annotate types (see below) are polynomials representing lengths of finite lists and arithmetic operations over these lengths:
\[
\text{SizeExpr} \quad p ::= \mathbb{N} \mid n \mid p + p \mid p - p \mid p * p
\]
where \( n \), possibly decorated, denotes a size variable, which ranges over integer numbers. Semantics for lists with negative sizes is not defined: these lists are ill-formed.
In the simplest case, the intuition behind a credit function \( k : \mathbb{N} \rightarrow \mathbb{R}^+ \) is that \( k(i) \) is the credit, that is, the amount of free heap units, assigned to the \( i \)-th cons-cell of a given list. Note that we count cons-cells starting from nil, so the head of a list of length \( n \) has index \( n \). Fractional credits are used to achieve more flexibility in distributing extra heap cells across the overall data structure.
As we noted in 5.1, credits may depend not only on the position of a cons-cell, but also on other parameters, like the length of the outer list or the sizes of “neighboring” lists. In general, a credit function has the type \( \mathbb{N} \times \ldots \times \mathbb{N} \rightarrow (\mathbb{N} \rightarrow \mathbb{R}^+) \). The symbol \( k \) denotes a parametric or non-parametric credit function.
Zero-order types are assigned to program values, which are integers and annotated finite lists:
\[
\text{Types} \quad \tau ::= \text{Int} \mid \alpha \mid L_{p}(\tau, k) \quad \alpha \in \text{TypeVar}
\]
where \( \alpha \), possibly decorated, is a type variable. For now, lists must have size expressions at every position in the type. Hence, they represent matrix-like structures.
First-order types are assigned to shapely functions over values of zero-order types. Let \( \tau^0 \) denote a zero-order type in which all the size annotations are size variables. First-order types are defined by:
\[
\tau^f ::= \tau_1^0 \times \ldots \times \tau_l^0, K \longrightarrow \tau_{l+1}, K'
\]
where the free size variables of the annotations of \( \tau_{l+1} \) are the size variables of the input types, and \( K, K' \) are non-negative rational constants.
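As an illustration (this example is not taken from the text above, but follows the standard Hofmann–Jost treatment), the familiar append function, which allocates one fresh cons-cell per element of its first argument, could be given the first-order type

\[
\text{append} : L_n(\alpha, 1) \times L_m(\alpha, 0), 0 \longrightarrow L_{n+m}(\alpha, 0), 0
\]

where the credit 1 carried by each cell of the first list pays for the corresponding freshly allocated cons-cell, and no free heap units are needed before or left after the call.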
5.3.2 Typing Rules
Consider a type system that generalises the type system of Hofmann and Jost [10].
A typing judgment is a relation of the form \( \Sigma; D; \Gamma; K \vdash e : \tau; K' \), where \( D \) is a set of Diophantine equations and disequations used to keep track of the size information. The signature \( \Sigma \) contains the type assumptions for the functions that are going to be checked.
In the typing rules, \( D \vdash p = p' \) means that \( p = p' \) is derivable from \( D \) in first-order logic. \( D \vdash \tau = \tau' \) is a shorthand meaning that \( \tau \) and \( \tau' \) have the same underlying type and that equality of their credit and size annotations is derivable. Consider some of the typing rules that define the typing judgment relation formally.
\[
D; \Gamma, \text{hd} : \tau, \text{tl} : L_p(\tau, k); K \vdash e_{\text{cons}} : \tau; K' \quad \text{(CONS)}
\]
The non-destructive pattern-matching rule takes into account that the list and its tail are shared and, therefore, they share the potential. In the simplified version below all, but the head-cell’s, potential is transferred to the tail. The head-cell’s credit is “opened” for usage:
\[
D; \Gamma, x : L_p(\tau', k); K \vdash (\text{match}\ x\ \text{with}\ \text{nil} \Rightarrow e_{\text{nil}} \mid \text{cons}(\text{hd}, \text{tl}) \Rightarrow e_{\text{cons}}) : \tau; K' \quad \text{(MATCH)}
\]
The function application rule for a function \( f \) may be viewed as a generalisation of the CONS-rule, with \( f \) instead of cons and the function's arguments instead of \( \text{hd} \) and \( \text{tl} \). Note that the precondition requires the information \( \Sigma(f) \) about the type of the function; in this way one achieves finiteness of the derivation tree when the function is recursive. The information may be incomplete, that is, the type may have unknown parameters in its annotations. Type inference for the annotated types consists in finding these parameters.
To deal with inter-structural exchange of resources, one needs rules like
\[
D; \Gamma, x : L_p(\tau, k); K \vdash e : \tau; K' \quad \text{(SHUFFLE)}
\]
This rule is not syntax-driven and increases the complexity of type checking. We plan to establish conditions that define how such inference rules must be applied.
5.4 Ongoing Research: Sized Types
Whilst exploring possible research directions, it became clear that an important aspect of any advanced amortized analysis is static derivation of the sizes of data structures. More specifically, the relation between the sizes of the argument and the size of the result of a function has to be known. The size of a data structure, for now, is the number of nodes it consists of.
We have studied the pure size-aware type system, which is obtained from the presented one by erasing credit functions and the resource constants $K$. We have shown that, in general, type checking for this system is undecidable. Indeed, consider the matching rule. In its nil-branch it contains the Diophantine equation reflecting the fact that the list is empty. At the end of type checking one may need to determine whether a branch is going to be entered or not, and to check this the Diophantine equations have to be solved. So, Hilbert's tenth problem is reducible to type checking, which is thus undecidable in general [17].
However, we have formulated a syntactic restriction that makes solving the equations trivially decidable: let-expressions are not allowed to contain pattern matching as a sub-expression.
It is not known whether type inference is decidable for the size-aware system. It amounts to solving systems of polynomial equations that may be non-linear [17].
To infer types we therefore propose an altogether different approach [17]. The idea is simple. First, note that the size dependencies are exact and polynomial. From interpolation theory it is known that any polynomial of finite degree is determined by a finite number of data points. Hence, if a degree of the polynomial is assumed and enough pairs of input–output sizes are measured by running the program on test data, a hypothesis for the size equations can be determined. If the size dependency indeed has the assumed degree, checking the hypothesis in the type system gives a positive result. By repeating the process for increasing degrees, any polynomial dependency will eventually be found, if it exists. If it does not exist, or the program does not terminate, the procedure does not terminate. We therefore named this weak type inference.
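The measure-and-interpolate idea can be sketched as follows. This Python toy is our own illustration: it interpolates sampled input–output sizes with Lagrange polynomials and, instead of checking the hypothesis in the type system, merely re-tests it on fresh sample points.

```python
from fractions import Fraction

def table_size(n):
    # Output size of a (hypothetical) program building an n-by-n table:
    # n rows of n cells each, plus n outer cons-cells.
    return n * n + n

def samples(f, degree):
    """Run f on degree + 1 test sizes and record the output sizes."""
    xs = list(range(1, degree + 2))
    return xs, [f(x) for x in xs]

def interpolate(xs, ys):
    """Lagrange interpolation; returns the hypothesis as a callable."""
    def h(x):
        total = Fraction(0)
        for i, xi in enumerate(xs):
            term = Fraction(ys[i])
            for j, xj in enumerate(xs):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return h

def weak_infer(f, max_degree=4, checks=range(5, 10)):
    """Try increasing degrees until the hypothesis fits fresh samples."""
    for degree in range(max_degree + 1):
        h = interpolate(*samples(f, degree))
        if all(h(x) == f(x) for x in checks):
            return degree, h
    return None

degree, h = weak_infer(table_size)
assert degree == 2 and h(20) == table_size(20)
```

As in the text, the procedure finds the quadratic dependency after failing at degrees 0 and 1; if no polynomial dependency exists, it simply never stops raising the degree.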
A further development of this system would, amongst others, include an adaptation to upper and lower bounds and support for higher-order functions.
6 RELATED WORK
The presented combination of amortization and types generalizes the approach from [10], which forms the foundational basis of the EU-funded project Mobile Resource Guarantees (MRG) [15]. The project has developed the infrastructure needed to endow mobile code with independently verifiable certificates describing its resource behavior (space, time). Its functional language Camelot is an implementation of the underlying language from [10]. Numerical annotations are not part of its typing; they are computed later, on top of a standard type-inference procedure. A Camelot program is compiled into Grail, a structured version of Java bytecode. The high-level type system is mirrored in a specialized heap-aware Hoare logic for the bytecode.
The AHA project can be considered one of the successors of MRG:
- it aims to extend the high-level type system of MRG to type systems for non-linear heap-consumption bounds;
- applications of the methodology to object-oriented programming will draw on the MRG experience with bytecode: one considers imperative object-oriented structures that have counterparts in functional programming;
- soundness of the type systems, the type-checking and inference procedures, and the object-oriented extensions will be implemented in an environment similar to the program-logic environment designed for MRG.
MRG has a few other successors. First, one should mention the large consortium Mobius [4], which, like MRG, runs under the EU framework Global Computing. Its aim is to design a bytecode verification tool that allows a large variety of formal methods to be employed. The bytecode properties of interest include information flow and resource consumption.
The aims of the EmBounded project [9] are to identify, quantify and certify resource-bounded code in Hume, a domain-specific high-level programming language for real-time embedded systems. The project develops static analyses of time and space consumption, involving size and effect type systems. The foundational results have realistic applications in embedded systems.
The ReQueSt project [2], funded by UK government’s agency EPSRC, aims to prevent the situations when, for instance, an expensive user’s request fails due to the lack of memory.
7 CONCLUSION
The AHA project aims to improve the state of the art in inferring upper bounds on heap-space usage. Improvements lie in the complexity of the bounds and the applicability to widely used languages. Ultimately, we want to implement both a type checking and a type inference system for heap space usage bounds of lazy functional and imperative programs.
REFERENCES
IMPLEMENTATION OF LOG MANAGEMENT SOLUTION IN A MEDICINE DISPENSER ROBOT
Lyudmila Grigoryeva
System logs play a crucial role in tracking issues, monitoring data and diagnosing problems within a system. Eventually, the amount of log data generated by systems and devices exceeds what can be managed manually, and companies start looking for a proper log management solution.
One of the goals of this thesis was to study log management and the steps that need to be taken before log data can be used for further analysis. One such step is data preprocessing, the process of removing inconsistencies and errors from raw data. During this study, a data preprocessing prototype was built to transform the case company's log files, which contained unformatted and incorrect timestamps.
Another goal was to research the Elastic Stack and its components to determine how they meet the basic functionalities of a log management solution. This particular solution was chosen based on it being open-source, lightweight and flexible. The theory is backed by an Elastic Stack implementation, and the thesis supports the initial claim that the Elastic Stack is a suitable choice for a log management solution.
KEYWORDS:
Data preprocessing, Log Management, Elastic Stack
# TABLE OF CONTENTS

**LIST OF ABBREVIATIONS**

1 **INTRODUCTION**

2 **BACKGROUND**

2.1 Log files and logging

2.2 Log management

3 **DATA PREPROCESSING**

3.1 Preprocessing

4 **ELASTIC STACK**

4.1 The Elastic Stack overview

4.2 Elasticsearch

4.3 Logstash

4.4 Kibana

4.5 Beats and Filebeat

4.6 X-Pack

4.7 Alerting

5 **ELASTIC STACK IMPLEMENTATION**

5.1 Prerequisites and preparations for implementation

5.2 Potential issues and drawbacks

6 **CONCLUSION**

**REFERENCES**

**APPENDICES**

Appendix 1. Source code for data preprocessing
PICTURES

Picture 1. Evondos service system architecture. (Evondos, 2019)
Picture 2. Common Log Format.
Picture 3. Syslog messages.
Picture 4. Kibana visualization dashboard. (Elastic.co, 2019e)
Picture 5. The Elastic Stack components. (Elastic.co, 2019)
Picture 6. Elasticsearch distribution. (Techartifact, 2016)
Picture 7. Logstash pipeline. (Dahlqvist, 2018)

TABLES

Table 1. Results of the optimization test.
# LIST OF ABBREVIATIONS
<table>
<thead>
<tr>
<th>Abbreviation</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>API</td>
<td>Application Programming Interface</td>
</tr>
<tr>
<td>CPU</td>
<td>Central Processing Unit</td>
</tr>
<tr>
<td>CSV</td>
<td>Comma-separated values</td>
</tr>
<tr>
<td>ISO</td>
<td>International Organization for Standardization</td>
</tr>
<tr>
<td>JVM</td>
<td>Java Virtual Machine</td>
</tr>
<tr>
<td>JSON</td>
<td>JavaScript Object Notation</td>
</tr>
<tr>
<td>RAM</td>
<td>Random-access memory</td>
</tr>
<tr>
<td>SSD</td>
<td>Solid-state drive</td>
</tr>
<tr>
<td>XML</td>
<td>Extensible Markup Language</td>
</tr>
</tbody>
</table>
1 INTRODUCTION
This thesis was commissioned by Evondos Ltd, a Finnish healthcare service company based in Salo. Outside of Finland, Evondos operates sales offices in Sweden, Norway and Denmark.
Evondos provides a digital health solution that addresses the issues in medicine distribution to patients. The system includes the Evondos® E300 Medicine Dispensing Robot and the Evondos® Telecare System (Picture 1):

Picture 1. Evondos service system architecture. (Evondos, 2019)
The Evondos system provides automatic medicine dispensation to patients with chronic conditions and long-term medication prescriptions. It supports patients’ well-being and quality of life by providing an effective service that:
- Ensures correct medication is delivered at the right time with the right dosage
- Ensures drug safety through remote monitoring of medication
- Reduces the need for routine home calls regarding taken medication
- Alleviates the pressure on care staff workers dealing with manual medication distribution
(Evondos, 2019)
The thesis was carried out for Evondos with the purpose of preparing a technical solution that will help the company set up a suitable and working log management solution.
The goals of the study were defined as to:
1. Investigate the process of log management
2. Determine how data should be preprocessed before being ingested into a log management system
3. Implement a log management solution
The first chapter of the thesis concentrates on the importance of logging and explains what log management and log analysis are. The second chapter looks at data preprocessing and the different types of data wrangling performed during this step, focusing specifically on data cleansing and how it can be done through scripting. The third chapter defines the important features of log management solutions that need to be considered when choosing a particular solution. The fourth chapter provides the theoretical background of the Elastic Stack, which is implemented in chapter five. Chapter six concludes this work by summarizing what has been studied and achieved during this study.
2 BACKGROUND
2.1 Log files and logging
Log files are files that store all the events that occur in a system or while running some software. These files consist of entries, where each entry contains information about what happens from the moment the system is started until it is shut down. The process of recording this information is known as logging. The purpose of logging is to keep track of all the events within a system in order to understand what is going on and to diagnose potential issues, such as running out of disk space. (Shields, 2017)
Log files are highly configurable and, generally, there are no strict rules regarding the size, format or location of a log. Different systems and applications use various approaches to logging and organizing log files. Some combine log entries from different sources in one file, while others keep separate files for error and access logs. (Solarwinds Loggly, n.d.) However, to avoid software developers having to design their own ad hoc logging systems, some standard logging formats were introduced, such as the Common Log Format for server logs and syslog for Unix-based systems.
Although, as mentioned before, there are no standards when it comes to logging, most log files contain the following fields (Picture 2, Picture 3):
1. Timestamp
2. Category or event classification
3. Descriptive message
(La Rosa, 2018)

2.2 Log management
Generating and configuring log files can become problematic and less valuable when proper log management is missing. Log management is an important procedure that deals with the vast amount of log files that are being produced. Generally, log management includes the following:
1. Log generation
2. Log storage
3. Log analysis
4. Alerting and notification
(Kent & Murugiah, 2006)
2.2.1 Log generation and storage
Log generation involves the process of generating log files as well as determining the sources that perform logging. If multiple sources are responsible for generating log files, it is important to ensure the consistency between the different log files.
Log storage, depending on the requirements of the system, can become complicated and tricky. This part of log management requires determining how much log data should be stored and where. Log files can be stored locally and also sent to one or more servers. However, local storage is not always possible due to limited space in the log generator, so, often, logs are sent to a remote server. Storing log entries in a database is another option, which provides a storage format that can be helpful during log analysis. (Kent & Murugiah, 2006)
Log storage also involves log rotation, which is the process of closing a log file and opening the next one once the previous file is considered complete. Log rotation essentially determines how much data a single log file should contain. Logs can be rotated on a regular time basis, e.g. hourly, daily or weekly, or when the log files reach a certain size, e.g. 1, 10 or 100 megabytes. Log rotation can be implemented in the log generator or configured with a third-party utility. (Kent & Murugiah, 2006)
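As a concrete example, size-based rotation of the kind described above is available out of the box in Python's standard logging library; the file name and the limits below are illustrative, not taken from the case company's setup:

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("dispenser")

# Rotate robot.log once it reaches 1 MB, keeping five old files
# (robot.log.1 ... robot.log.5) before the oldest is discarded.
handler = RotatingFileHandler("robot.log", maxBytes=1_000_000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.warning("cartridge nearly empty")  # written to robot.log
```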
Another part of log storage is log normalization, which is the process of converting data to a single format. Log normalization ensures data analysis can be performed at ease, especially when there are multiple log sources in the system. Normalization, however, can be resource-consuming. (Kent & Murugiah, 2006)
Before logs are sent to a remote server, system administrators and software developers also need to determine whether log files need to be encrypted and whether certain information needs to be protected. (Kent & Murugiah, 2006)
2.2.2 Log analysis
Log analysis is the process of reading, reviewing and studying log files. Log analysis helps developers and system administrators to troubleshoot issues in the OS or debug software applications. (Sumo Logic, n.d.)
Once it has been determined how log data is stored and collected from log generators, log analysis can take place. Traditionally, log files have been configured without built-in analysis tools, although nowadays some log generators provide limited analysis capabilities. Before log data can be analyzed, log files also need to be centralized if they come from multiple log-generating sources. (Kent & Murugiah, 2006)
Log analysis can be performed using various functions and methods. The most common way to access log data across various sources is through searching. When building a log management system, one of the important things is to set up a clean and responsive search interface. (Chuvakin, 2010) It can be implemented using a range of tools: from simple command-line utilities such as grep to third-party software powered by machine learning. Such machine learning algorithms can even provide predictive analysis beyond detecting security errors and solving everyday issues.
Indexing is one of the key components of a reliable and successful log management system that optimizes its performance (sometimes by a factor of a hundred in terms of speed). Indexing is done by creating an index, a data structure that allows for quick search queries across the log storage. Not all log management systems, however, support indexing. (Chuvakin, 2010)
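The idea behind indexing can be illustrated with a toy inverted index; the log lines below are invented. A term lookup becomes a single dictionary access instead of a scan over every line, which is where the speed-up comes from:

```python
from collections import defaultdict

lines = [
    "2019-05-04 12:00:01 ERROR motor stalled",
    "2019-05-04 12:00:05 INFO dose dispensed",
    "2019-05-04 12:01:00 ERROR door open",
]

# Map every lowercased term to the set of line numbers containing it.
index = defaultdict(set)
for i, line in enumerate(lines):
    for term in line.lower().split():
        index[term].add(i)

# Searching for "error" now touches only the matching lines.
assert index["error"] == {0, 2}
```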
One way to review data without diving too deep into log analysis and getting caught up in endless search queries is visualization. Visualizing logs gives the possibility to take a quick glance at issues and events in the system as well as its performance and availability through different graphs and dashboards. (Nguyen, 2018)

**Picture 4. Kibana visualization dashboard.** (Elastic.co, 2019e)
2.2.3 Alerting and notification
At the beginning of this chapter, logging was defined as an important process used for diagnosing issues and getting a better understanding of what is going on in the system. Parsing and indexing log files helps developers perform quick searches across thousands of lines of their log data. In order to avoid constant searching, a good option would be to set up some alerting and monitoring tool to watch out for specific events and trigger some kind of endpoint such as e-mail, alarm or Slack channel. (Solarwinds Loggly, 2019)
3 DATA PREPROCESSING
3.1 Preprocessing
Before any data can be processed and used for analysis, it is important to ensure the high quality of data in order to get the best results and predictions from the analysis. Some attributes that define data quality:
1. Accuracy
Accuracy in data refers to the amount of erroneous values in the results. Inaccurate data can occur in datasets due to human or computer error as well as someone deliberately adding incorrect or false data. Incorrect data formats and duplicate values can also lead to inaccuracy in a dataset.
2. Completeness
Data is incomplete when some fields and attributes are missing. This can be caused by accidental deletion of existing data or simply because data was not available at the time when it was collected.
3. Consistency
When data originates from multiple sources, it is important it is aggregated consistently.
(Mushtaq, 2019)
In theory, log generation should be configured in such a way that it would ensure data consistency and accuracy from the start. However, real-world data is often incomplete, contains errors and discrepancies, and is not centralized due to the reasons described above. This is when preprocessing comes into the picture.
During preprocessing, log files that contain raw data can be parsed, cleaned, normalized and transformed. (Sharma, 2018) Depending on what is compromising the quality of the dataset, a variety of actions can be performed upon the dataset. Those actions are:
1. Data cleaning: replace missing values, correct inconsistencies, remove noise (random variance in data) and outliers
2. Data transformation: normalize data
3. Data reduction: reduce the volume of the dataset without losing the integrity of the original, for easier data handling
4. Data integration: propagate data and detect conflicting values
(Mushtaq, 2019)
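A minimal Python sketch of the first of these actions, data cleaning, on a hypothetical record set (the field names are made up for illustration):

```python
# A hypothetical record set with one duplicate and one missing value.
records = [
    {"host": "web-1", "status": "200"},
    {"host": "web-1", "status": "200"},   # duplicate
    {"host": "web-2", "status": None},    # missing value
]

# Data cleaning: drop exact duplicates while preserving order.
seen, deduplicated = set(), []
for record in records:
    key = tuple(sorted(record.items()))
    if key not in seen:
        seen.add(key)
        deduplicated.append(record)

# Data cleaning: replace missing values with a placeholder.
for record in deduplicated:
    if record["status"] is None:
        record["status"] = "unknown"
```

Real pipelines would typically use a library such as pandas for this, but the principle is the same: detect the quality problem, then replace or remove the offending values.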
3.2.1 Data cleaning
Based on the kind of data cleaning that needs to be performed, different techniques are applied. Data can be replaced, transformed, or removed. Let’s take a closer look at how raw data can be transformed in one of the sample log files.
Timestamps identify when a specific event occurs in the system. Sloppy implementations can make it difficult to parse log data if the timestamps are incorrect and inconsistent.
In this particular case, one thing that requires attention is the timestamp format. It is currently non-standard, missing the year attribute and the timezone offset.
Oct 3 12:26:30 example.info: This is an example log message......
The example above illustrates a sample timestamp from the log file. It needs to be converted to ISO format, which is a standard way of representing date- and time-related data. (International Organization for Standardization, n.d.) The date and time objects that follow ISO look like this:
Date: YYYY-MM-DD or YYYYDDD
Time: hh:mm:ss or hhmmss
For combined time and date representations, the following format is followed:
<date>T<time>
Time zones in ISO can be represented as local time, UTC (+00:00), or an offset from UTC, which is the difference in hours from Coordinated Universal Time. The sample timestamp cannot be converted to the ISO format directly because of the missing year attribute.
Preprocessing, or, more specifically, data cleaning, will add the year attribute to the timestamp.
For now, it will be assumed that the log file is created in 2019 and the device is located in Finland. Hence, the timestamp will need to look like:
2019-10-03T12:26:30+03:00
Python will be the chosen programming language for wrangling the timestamps in the log files. To do that, the following modules need to be imported: `datetime`, `pytz`, `re`:
- `datetime` provides classes for manipulating date and time objects.
- `pytz` provides a database of time zones.
- `re` provides regular expression matching.
(Python Software Foundation, 2019)
Regular expressions can be used to extract the date and time from the message line; the year, assumed to be 2019, is added manually:
```python
year = "2019"
date_match = re.search(r'^[A-Za-z]{3}\s+\d{1,2}', line)
time_match = re.search(r'\d{2}:\d{2}:\d{2}', line)
raw_timestamp = year + " " + date_match.group() + " " + time_match.group()
```
The function `strptime` takes a given string and formats it to a Python `datetime` object:
```python
timestamp = datetime.datetime.strptime(raw_timestamp, "%Y %b %d %H:%M:%S")
```
>>> 2019-10-03 12:26:30
Now the resulting timestamp is a `datetime` object, but it is still represented as “naive” time without any time zone information. To fix that, the `pytz` module imported earlier can be used to define the time zone the device is in, in this case, the Helsinki time zone.
```python
timezone = pytz.timezone("Europe/Helsinki")
aware_timestamp = timezone.localize(timestamp)
```
Now the timestamp is aware of the time zone it is in.
The last step is to represent the timestamp according to the ISO format. To achieve that, the `isoformat` function from the datetime module can be used:
```python
iso_timestamp = aware_timestamp.isoformat()
```
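The steps above can be combined into a single helper function. The sketch below uses the standard-library `zoneinfo` module (Python 3.9+) instead of `pytz`; the function name `syslog_to_iso` is made up for illustration:

```python
import datetime
import re
from zoneinfo import ZoneInfo

def syslog_to_iso(line, year, zone="Europe/Helsinki"):
    """Extract a syslog-style timestamp and return it in ISO format.

    The year is not present in the log line, so it must be supplied;
    the time zone defaults to Helsinki as assumed in the text.
    """
    date_match = re.search(r'^[A-Za-z]{3}\s+\d{1,2}', line)
    time_match = re.search(r'\d{2}:\d{2}:\d{2}', line)
    raw = "%s %s %s" % (year, date_match.group(), time_match.group())
    naive = datetime.datetime.strptime(raw, "%Y %b %d %H:%M:%S")
    aware = naive.replace(tzinfo=ZoneInfo(zone))
    return aware.isoformat()
```

For the sample line, `syslog_to_iso("Oct 3 12:26:30 example.info: ...", "2019")` produces the ISO timestamp shown earlier.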
Another issue that needs to be addressed is the inconsistency between timestamps in log messages that were combined from different sources. Here is an example of such sample:
```
Apr 29 10:23:02 mila-VirtualBox nm-dispatcher: req:1 'down' [enp0s3]: start running ordered scripts...
Jan 1 02:00:11 mila-VirtualBox anacron[736]: Job 'cron.daily' terminated
Jan 1 02:00:17 mila-VirtualBox dhclient[3533]: DHCPACK of 10.0.2.15 from 10.0.2.2
Jan 1 02:00:24 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5180] address 10.0.2.15
Jan 1 02:00:36 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5188] plen 24 (255.255.255.0)
Jan 1 02:00:37 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5195] gateway 10.0.2.2
Jan 1 02:00:43 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5209] lease time 86400
Jan 1 02:00:48 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5209] lease time 86400
Jan 1 02:01:12 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5215] nameserver '192.168.135.11'
Jan 1 02:02:40 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5215] nameserver '192.168.135.12'
Apr 29 10:26:02 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5215] nameserver '192.168.135.11'
Apr 29 10:26:02 mila-VirtualBox NetworkManager[734]: <info> [1574416265.5215] nameserver '192.168.135.12'
```
Here some of the log messages do not have access to the real-time clock and generate incorrect timestamps. This complicates log analysis if the raw data is not transformed before being ingested into further storage.
In order to extrapolate the new timestamps, the information provided by the incorrect timestamps will be used. It is possible to estimate roughly how much time has passed before the time is set to the correct value by looking at the last line with an incorrect timestamp (Jan 1 02:02:40).
Using this value and the values in each message line, new timestamp values can be extrapolated. First, the correct timestamps should be formatted to a datetime object:
```python
# The first correct timestamp after the jump and the last incorrect time value
correct_time_str, incorrect_time_str = '2019-04-29 10:26:02', '02:02:40'
correct_time = datetime.datetime.strptime(correct_time_str, "%Y-%m-%d %H:%M:%S")
```
The hour attribute of the incorrect time value needs to be removed, and the remaining minutes and seconds converted to a time difference:
```python
incorrect_end = datetime.datetime.strptime(incorrect_time_str, "%H:%M:%S").replace(hour=0)
minutes = datetime.timedelta(minutes=incorrect_end.minute, seconds=incorrect_end.second)
```
By subtracting the amount of minutes passed during the incorrect timestamp phase and adding the time passed in each individual line, the correct representation of the timestamp can be achieved:
```python
old_time_str = incorrect_timestamp_in_the_message_line
old_time = datetime.datetime.strptime(old_time_str, "%H:%M:%S").replace(hour=0)
offset = datetime.timedelta(minutes=old_time.minute, seconds=old_time.second)
new_time = correct_time - minutes + offset
```
By using the previously mentioned functions to format the timestamps to ISO and making sure they are aware of the time zone, this results in:
```
2019-04-29T10:23:02+03:00
2019-04-29T10:23:33+03:00
2019-04-29T10:23:39+03:00
2019-04-29T10:23:46+03:00
2019-04-29T10:23:58+03:00
2019-04-29T10:23:59+03:00
2019-04-29T10:24:05+03:00
2019-04-29T10:24:10+03:00
2019-04-29T10:24:34+03:00
2019-04-29T10:26:02+03:00
2019-04-29T10:26:02+03:00
2019-04-29T10:26:02+03:00
```
As mentioned before, clean and concise datasets are crucial in log analysis. It is important to understand where the data comes from, especially in scenarios where data sets have a geographic scope (e.g. devices are located in different time zones while the log files record timestamps as naive values).
This example took a closer look at timestamps, and how they can be formatted and cleaned up for further log processing. In order to get a clear picture of what is happening in the log file, one can ask some key questions before parsing the data:
1. How were the log files generated? (Was the information acquired from multiple sources?)
2. Are the timestamps standardized?
3. Are the timestamps aware of the time zones the device or system is located in?
4. How are timestamps created in the log files?
5. If the timestamps are inconsistent, where can you get relatively correct information?
Incorrect information can pose challenges to someone who is trying to perform log analysis. However, proper preparation and understanding the data set you are working with can help you to smoothly preprocess the data. This example demonstrated how to programmatically handle an issue with timestamps in a given log file by using a simple piece of code.
4 ELASTIC STACK
Companies have been using logs to identify potential performance issues and to monitor their systems for a while now. The one thing that has become a challenge when it comes to log management is the amount of data that is constantly being produced. Previously, software engineers could simply log into their application and use `grep` to search through the log files. Nowadays, to perform proper log analysis, engineers need to use more advanced and modern log management tools.
Splunk has been the market leader specifically for log management (and handling big data in general) for quite a long time. It offers numerous functionalities for searching and analysing machine-generated big data which can be set up on-premise as well as on the cloud. While Splunk is a mature product that offers a large variety of tools and is used by 12,000 customers, including companies such as Atlassian, Adobe, Coca-Cola and Porsche, it comes with a high price tag. (Yigal, 2017)
The Elastic Stack, on the other hand, hasn’t been around for too long. Yet it is downloaded about 500,000 times every month, and the number keeps growing. Why is it so popular? The Elastic Stack is open source, which most likely explains the popularity. The main benefits of using open-source solutions for IT organizations are being able to avoid vendor lock-in and to ensure better security. The latter comes from having access to the source code, which might mean issues will get discovered earlier since it is possible to perform code audits. (Poortvliet, 2017)
4.1 The Elastic Stack overview
Elastic stack, previously known as ELK, is a group of open-source products developed by Elastic that consists of Elasticsearch, Logstash, Kibana and Beats, which together provide an end-to-end log analysis with deep searching, analysis and visualization tools. First developed separately and for different purposes, those components were merged into a single stack in 2012 (Beats were introduced in 2015, which turned ELK into the Elastic Stack). Elasticsearch is a search engine, Logstash is a log aggregator and Kibana is a visualization tool. Beats is a collection of data shipping agents. The Elastic Stack is available on-premises as well as SaaS (Software as a service) on the Elastic cloud. The on-premises solution features a few types of subscriptions: open source, basic, Gold, Platinum and Enterprise. The last three are paid and each offer extra services at an increased cost. (Elastic.co, 2019b)

4.2 Elasticsearch
Elasticsearch is a powerful open source search and analytics engine for all types of data based on the Apache Lucene library. It features a RESTful API, is schema-free and has a web interface, and can do a full-text search and index documents in near real-time. To handle high data volume, Elasticsearch uses shards to distribute high workload. (Elastic.co, 2019c)
Every Elasticsearch instance is a cluster of one or multiple nodes. It is possible to run a single node cluster, but efficient searching and indexing capabilities of Elasticsearch come from the ability to distribute tasks across multiple nodes. Those nodes can be
assigned different tasks such as storing data and executing data-related operations like searching. Other nodes can be responsible for forwarding cluster requests or for pre-processing documents. One of those nodes is always a master node, which manages and configures all the actions in the cluster. Every node in a cluster can handle HTTP and Transport traffic and every node is aware of other nodes. (Berman, 2018)
Elasticsearch is essentially a NoSQL database: it can ingest structured or unstructured data. Elasticsearch does not use rows or columns to store information, instead, it uses serialized JSON documents. A collection of related documents is called an index. In multi-node clusters those documents are usually distributed across the whole cluster to enable immediate data access from any node. Furthermore, to protect data from hardware failures and to increase capacity, Elasticsearch uses shard replicas. Essentially an index is a group of physical shards: each document within an index has one primary shard. In Elasticsearch those primaries have their own copies, replicas. If some shards or nodes fail, Elasticsearch is still able to perform a proper search by doing so in parallel on those replicas. (Elastic.co, 2019c)

In a typical Elastic Stack pipeline, data will be parsed with Logstash first, and then the resulting JSON is indexed into Elasticsearch. As mentioned in chapter 2, indexing is what makes these Elasticsearch documents searchable in near real-time. This speed comes from a data structure called an inverted index, which contains a list of unique words that can be found in a document as well as a list of documents for each word in which it
appears. This means when a user is searching for a specific term, it can be looked up only once, which enables a very fast full-text search. (Elastic.co, 2019c)
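The idea of an inverted index can be illustrated with a toy Python sketch (a deliberate simplification; Lucene's actual implementation is far more elaborate):

```python
# Toy inverted index: map each unique word to the set of
# document IDs in which it appears.
documents = {
    1: "error connecting to database",
    2: "connection to database restored",
    3: "user login error",
}

inverted_index = {}
for doc_id, text in documents.items():
    for word in text.split():
        inverted_index.setdefault(word, set()).add(doc_id)

# A term lookup is now a single dictionary access instead of
# a scan over every document.
matches = inverted_index.get("error", set())   # {1, 3}
```

Because the index is built once at indexing time, every subsequent search for a term is a direct lookup rather than a full scan, which is what makes near real-time full-text search possible.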
For indexing and searching data, Elasticsearch provides a simple REST API. To put a document into some index, the following PUT request can be used:
```
PUT /customer/_doc/1
{
"name": "John Doe"
}
```
This request adds a new document with a unique ID of 1 and puts it into the “customer” index. If such an index does not already exist, Elasticsearch creates one automatically.
To retrieve the newly created document, the following GET request can be used:
```
GET /customer/_doc/1
```
The response looks like this:
```
{
"_index" : "customer",
"_type" : "_doc",
"_id" : "1",
"_version" : 1,
"_seq_no" : 26,
"_primary_term" : 4,
"found" : true,
"_source" : {
"name": "John Doe"
}
}
```
(Elastic.co, 2019c)
4.3 Logstash
Logstash is an open source collection of tools used for collecting, processing and forwarding data. It is based on input-plugins that enable it to ingest various kinds of data from a multitude of sources including Beats, Azure Event Hubs, Java standard input, Twitter, etc. (Elastic.co, 2019f)
When data is travelling through the Elastic stack pipeline from source to storage, it is queued in memory. Logstash is capable of performing powerful transformation and preparation regardless of the format and complexity. To do that Logstash reads queued data in small batches and uses a large number of filter plugins which are aimed at different types of data processing. (Elastic.co, 2019f)
After data processing is completed, Logstash can route data to a variety of outputs such as Elasticsearch, Google Cloud Storage, MongoDB, and standard output to name a few. It is also possible to configure a custom output using an API for plugin development.
Data processing can be handled in one or multiple pipelines (one by default). This makes it possible to separate logical flows with different types of input and output for each pipeline. Here is an example of a single pipeline:

Picture 7. Logstash pipeline. (Dahlqvist, 2018)
The scope of this work focuses on ingesting data from Filebeat and forwarding it to Elasticsearch using Logstash. Other pipeline variations are not going to be reviewed.
To parse log data Logstash uses filter plugins such as JSON, XML, CSV and Ruby to name a few. Logstash is also capable of splitting multi-line messages into separate events, aggregating metrics data, removing special characters, extracting numbers from strings, adding geographical information from IP addresses and much more. However, log data exists in so many forms that there might not be a perfect filter for it. (Dahlqvist, 2018) For example, log messages below depict example Squid cache access data:
```
1524206424.034 19395 207.96.0.0 TCP_MISS/304 15363 GET http://elastic.co/android-chrome-192x192.gif - DIRECT/10.0.5.120 -
1524206424.145 106 207.96.0.0 TCP_HIT/200 68247 GET ... image/gif
```
Parsing log data in text form, such as in the example above, is possible with two common Logstash filters: dissect, which works with delimiters, and grok, which uses regular expression matching. (Dahlqvist, 2018)
The dissect filter works best if the structure of log data is consistent. Instead of matching specific patterns like in regular expressions, dissect matches delimiters, for example, spaces. Whatever is left between those delimiters becomes one of the parsed fields. (Dahlqvist, 2018)
To parse one of the earlier examples:
```ruby
filter {
dissect {
mapping => {
"message" => "%{timestamp->} %{duration} %{client_address} %{cache_result}/%{status_code} %{bytes} %{request_method} %{url} %{user} %{hierarchy_code}/%{server} %{content_type}"
}
remove_field => ["message"]
}
}
```
The result looks like:
```json
{
"user" => "-",
"content_type" => "-",
"host" => "localhost",
"cache_result" => "TCP_MISS",
"@timestamp" => 2018-04-24T12:43:07.406Z,
"duration" => "19395",
"request_method" => "GET",
"url" => "http://elastic.co/android-chrome-192x192.gif",
"timestamp" => "1524206424.034",
"status_code" => "304",
"server" => "10.0.5.120",
"@version" => "1",
"client_address" => "207.96.0.0",
"bytes" => "15363",
"path" => "/home/logstash/testdata.log",
"hierarchy_code" => "DIRECT"
}
```
Grok uses regular expressions and needs specific patterns for matching. Matching is done from left to right. There is a list of common standard patterns such as WORD, which
matches a single word, or IP, which matches an IPv4 or IPv6 IP address. GREEDYDATA matches all of the remaining data. (Dahlqvist, 2018)
```
filter {
grok {
match => {
"message" => "%{NUMBER:timestamp}%{SPACE}%{GREEDYDATA:rest}"
}
}
}
```
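Under the hood, grok patterns such as NUMBER and GREEDYDATA expand to ordinary regular expressions. A rough Python equivalent of the filter above, with simplified stand-ins for the pattern definitions:

```python
import re

# Simplified stand-ins for the grok patterns used above
# (Logstash's actual definitions are more elaborate).
NUMBER = r'\d+(?:\.\d+)?'
GREEDYDATA = r'.*'

pattern = re.compile(
    r'(?P<timestamp>%s)\s+(?P<rest>%s)' % (NUMBER, GREEDYDATA)
)

line = "1524206424.034 19395 207.96.0.0 TCP_MISS/304 15363 GET"
match = pattern.match(line)
fields = match.groupdict()
# fields["timestamp"] == "1524206424.034"
```

The named groups play the same role as grok's `%{PATTERN:name}` syntax: each match becomes a named field in the resulting event.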
If it is possible, dissect should be preferred over grok since the latter can slow down the whole pipeline if used incorrectly. Failed matches increase performance penalties, which obviously leads to unhappy users. In fact, dissect was developed as an alternative solution for battling potential grok issues. (Boertje, 2016)
4.4 Kibana
Kibana is an open source data visualization platform built to work with Elasticsearch. Kibana is used for viewing, searching and monitoring data stored in Elasticsearch indices through a browser-based interface. On top of that, with Kibana users can get a better understanding of large volumes of data by visualizing it with pie charts, histograms, heat maps and line graphs. (Elastic.co, 2019e)
To view specific data in Kibana the index pattern needs to be selected. An index pattern defines Elasticsearch indices that contain information the user wants to access and view.
Picture 8. Kibana Discover page. (Elastic.co, 2019e)
The Discover page (Picture 8) shows documents indexed into the thesis-index. The number of matching documents is referred to as hits, which is displayed in the toolbar. By default, Kibana sorts the hits in reverse chronological order, showing the newest documents first. (Elastic.co, 2019e)
To perform a search across the indices, Kibana uses its own Kibana Query Language (KQL). It is simple and includes scripted field support as well as autocomplete. It follows the Lucene key:value syntax, which uses a colon to separate fields from values.
response:200
The query above will match every document with the field “response” which has the value of 200.
message:"Hello World"
This query will search for an exact match in the message field. Skipping the quotation marks will result in breaking the phrase into separate tokens, and any document with either “Hello” or “World” in the message field will get matched.
Visualizations in Kibana are based on Elasticsearch search queries. Aggregated data that is extracted and processed can be used to create different kinds of charts, tables and maps to gain deeper insight into data. All of the pie charts, time series tools, geo maps and data tables can be combined into a dynamic custom dashboard. (Elastic.co, 2019e)
4.5 Beats and Filebeat
Beats is a collection of lightweight data shippers used for sending data from different sources and machines directly into Logstash or Elasticsearch. Beats were introduced in Elastic Stack 5.0 and consist of Filebeat, Metricbeat, Packetbeat, Winlogbeat, Auditbeat, Heartbeat and Functionbeat, each designed to ship different kinds of data. Since the scope of this work focuses on log data, only one kind of Beats shipper, Filebeat, will be reviewed. (Elastic.co, 2019a)
Before Beats and Filebeat were introduced to Elastic, Logstash was used for both handling the data transformation (including filtering and aggregation) and pulling logs from multiple sources and pushing them down the data pipeline, usually into Elasticsearch. However, with Logstash handling both data filtering and data pipelines,
the memory consumption would grow drastically and slow down the whole process. This pushed for a change and soon Elastic released Filebeat as a part of the new collection of data shippers.
Filebeat monitors locations that are specified for log files, then collects and forwards them further down the Elastic pipeline. Filebeat can ship data directly to Elasticsearch and it even provides some built-in modules that can parse common log formats such as syslog, Apache logs, MySQL, etc. Filebeat, however, does not offer capabilities for advanced log processing, so it cannot replace Logstash. (Elastic.co, 2019d)
4.6 X-Pack
Originally X-Pack was developed as an extension for the Elastic Stack. It provided extra features such as security, alerting and monitoring, machine learning, etc. Most of those features were only available for the paid subscriptions, but since 2018 X-Pack has been available by default with the basic Elasticsearch installation. Some advanced security and machine learning features of X-Pack are still only available for paid subscriptions. (Kearns, 2019)
X-Pack enables role-based access control as well as TLS encrypted communication. With X-Pack it is possible to add cluster passwords for Elasticsearch, enable anonymous access, and perform message authentication as well as audit security events. (Kearns, 2019)
4.7 Alerting
On top of querying and visualization, Elasticsearch provides another option to monitor and analyze log data – alerting. Alerting comes as a part of the X-Pack. One of those alerting tools is Watcher, which can be used to create various “watches” which monitor log data periodically. (Elastic.co, 2019e)
The alerting features in X-Pack are not available for open-source and Basic subscriptions. One of the free and open-source alternatives is Elastalert. It can be configured in a separate environment from Elasticsearch and only needs to know on which port Elasticsearch is running. (Yelp, 2014)
To start alerting, Elastalert needs to be configured to run a specific rule. Possible rules include:
- **frequency rules**, where X amount of events happen within a specific time period
- **spike rules**, where the rate of events increases or decreases
- **flatline rules**, where there are less than X amount of events within a specific time period
- **blacklist and whitelist rules**, where a field matches one of the lists
- **any rules**, where an event matches a given filter
- **change rules**, where a field changes values within a specific time period
(Yelp, 2014)
Besides the rule type, each rule needs to have defined filters and what alert it sends. Filters are commonly used for partial and full matching of strings, matching integer ranges, and wildcard matching. To send alerts, Elastalert offers multiple endpoints that can be configured with Elasticsearch: e-mail, Slack, JIRA, Telegram and Amazon’s SNS to name a few. (Yelp, 2014)
The example below shows how to set up a frequency rule that sends alerts to a specified Slack channel. The alert is sent if the login process fails three times within an hour.
```
name: slack-demo
type: frequency
index: logstash-*
num_events: 3
timeframe:
  hours: 1
filter:
- query:
    query_string:
      query: "message: login failure"
alert:
- "slack"
slack:
  slack_webhook_url: "<webhook>"
```
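The logic behind such a frequency rule can be sketched in Python (the function and event fields below are illustrative assumptions, not Elastalert's actual API):

```python
import datetime

def frequency_rule_fires(events, query, num_events, timeframe):
    """Return True if at least num_events events matching the query
    occurred within any window of the given timeframe."""
    matching = sorted(
        event["timestamp"] for event in events if query in event["message"]
    )
    if len(matching) < num_events:
        return False
    # Slide a window over the sorted timestamps and check whether
    # num_events consecutive matches fit inside the timeframe.
    for i in range(len(matching) - num_events + 1):
        if matching[i + num_events - 1] - matching[i] <= timeframe:
            return True
    return False

now = datetime.datetime(2019, 10, 3, 12, 0)
events = [
    {"timestamp": now - datetime.timedelta(minutes=m), "message": "login failure"}
    for m in (5, 20, 40)
]
fires = frequency_rule_fires(
    events, "login failure", num_events=3,
    timeframe=datetime.timedelta(hours=1),
)  # True: three matching events within one hour
```

Elastalert performs this kind of counting by querying Elasticsearch periodically rather than holding events in memory, but the triggering condition is the same.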
5 ELASTIC STACK IMPLEMENTATION
5.1 Prerequisites and preparations for implementation
There are a few recommendations that need to be considered before setting up the Elastic stack for deployment: memory allocation, CPU usage, and disk space management.
5.1.1 Memory
Memory is a crucial part of the stack and it is very likely it will be one of the first resources to run out.
Elasticsearch runs on the JVM, and many of its Lucene-based data structures use disk-based formats that are too large to be loaded into main memory in their entirety. Hence, it is important that there is enough heap space available for Elasticsearch itself. The heap size is specified under JVM options and determined depending on the amount of available RAM: no more than half of the physical RAM should be used. Although there are no hard and fast rules regarding these values, the Elastic Stack engineers recommend keeping the heap size below roughly 30 to 32 gigabytes. (Elastic.co, 2019c)
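In a standard installation these bounds are set in the `jvm.options` file; for example, on a machine with 32 gigabytes of RAM the configuration could look as follows (the 16-gigabyte figure is only an illustration of the half-of-RAM rule):

```
# config/jvm.options — set initial and maximum heap to the same value
-Xms16g
-Xmx16g
```

Setting `-Xms` and `-Xmx` to the same value avoids costly heap resizing at runtime.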
Too small a heap size can result in memory errors, and the application won’t be able to handle multiple operations simultaneously. Java is a garbage-collected language, which means it occasionally has garbage-collection pauses. Those pauses are initiated when some region of memory is full and Java needs to remove data that is no longer needed. During these pauses, the JVM suspends other operations. This largely affects the end-user experience, as Elasticsearch will be able to perform fewer queries and indexing operations per second during these frequent, short pauses. Configuring too large a heap size is no good either: with a larger heap, the JVM will have infrequent but long pauses, which Elasticsearch is not able to differentiate from a node that is unreachable. When a node hangs, Elasticsearch removes it from the cluster and reallocates its shards. This makes the cluster unstable. (Tedor, 2016)
5.1.2 CPUs
According to the Elastic Stack guide, most Elasticsearch deployments do not require extensive CPU resources. High CPU usage is expected when indexing and querying big
volumes of data, but only for a short period of time. These spikes in CPU usage can also occur in the JVM if the heap is too small. (Elastic.co, 2019f)
The type of processor does not matter as much as proper configuration of other resources such as memory and disk space.
5.1.3 Disk space
As mentioned in chapter 4, the power of Elasticsearch comes from its ability to distribute tasks across multiple nodes in the cluster. While it is possible to run a functional cluster of one node, adding more nodes increases the capacity and reliability of the cluster. When there is only one node, there are no replica shards and all primary shards are located on the single node. If something happens to this node, data is at risk. However, when the cluster has more than one node, it automatically creates and allocates replica shards. (Elastic.co, 2019c)
Before Elasticsearch allocates new shards (or relocates old ones) to a node in the cluster, it looks at the disk space available on that node. There are default values that control shard allocation. By default, if the node has used up more than 85% of disk space, Elasticsearch won’t allocate shards to it. (Elastic.co, 2019c)
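These thresholds are exposed as disk watermark settings in `elasticsearch.yml`; the defaults look roughly like this:

```
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 95%
```

Above the high watermark, Elasticsearch starts relocating shards away from the node, and at the flood stage it marks affected indices read-only to protect the node.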
It is also important to keep in mind that the size of raw data will most likely change once it is transformed during indexing. This differs for different types of data and how it was enriched. For example, indexing logs that are in JSON format without adding any additional data probably won’t change the size. However, if the data is initially unstructured and needs additional information before being indexed, its size will most likely increase. There is no single way to calculate the exact amount of storage required for indexing data, but one option is to avoid testing with only a small amount of data before going into production. (Dahlqvist, 2018)
A similar approach should be used when estimating how much disk space querying will take up. To estimate this, the best option is to simulate the levels of querying that will be done in production.
According to a blog on Elasticsearch storage requirements (Dahlqvist, 2017), there are some ways to optimize on-disk storage. One is to avoid using default settings in Filebeat that enable dynamic field mapping. Dynamic field mapping refers to labelling all fields in a document as text in order to allow free text search. With custom mappings it is possible
to label fields that do not require text-search as keywords. This small optimization can save up to 20% of disk space. (Dahlqvist, 2017)
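Such a custom mapping can be supplied when the index is created; in the sketch below, the index and field names are only examples:

```
PUT /logs-index
{
  "mappings": {
    "properties": {
      "hostname": { "type": "keyword" }
    }
  }
}
```

A `keyword` field is stored as a single unanalyzed token, so it supports exact matching and aggregations without the overhead of full-text indexing.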
Another solution is to remove unnecessary fields in a document before indexing. When raw data is indexed into Elasticsearch, it goes through enrichment with new fields being added to the log. (Dahlqvist, 2017)
Jan 5 19:49:15 mila-VirtualBox anacron[845]: Normal exit (1 job run)
The syslog message above will look like this after Filebeat sends it to Logstash, where it is parsed:
```
{
"ecs" => {
"version" => "1.1.0"
},
"@version" => "1",
"log" => {
"offset" => 70519,
"file" => {
"path" => "/var/log/syslog"
}
},
"host" => {
"name" => "mila-VirtualBox"
},
"timestamp" => "2020-01-05T19:49:15+03:00",
"program" => "anacron[845]",
"input" => {
"type" => "log"
},
"hostname" => "mila-VirtualBox",
"agent" => {
"hostname" => "mila-VirtualBox",
"id" => "3cce9ee7-2f9f-4bb4-9b1c-25be78676753",
"version" => "7.5.0",
"type" => "filebeat",
"ephemeral_id" => "60444a6a-060c-4c15-8a0f-0f669a5eab8c"
},
"@timestamp" => 2020-01-05T20:21:57.387Z,
"message" => "Normal exit (1 job run)",
"tags" => [
[0] "beats_input_codec_plain_applied"
]
}
```
Fields such as ecs, log, input, agent and tags were added during enrichment. While this new data provides additional information about the log files that can be useful to the user, it also takes up more space. If some of the fields are deemed unnecessary, it can be beneficial to remove them to save some storage space. Depending on how many fields are removed, up to 17% of disk space can be saved. (Dahlqvist, 2017)
Another important consideration is data compression, which can have a big impact on disk space. By default, Elasticsearch compresses data before writing it to a disk using a specific algorithm, which can be swapped for another, more aggressive alternative. It comes with a trade-off, which affects the overall indexing performance by using more CPU. It might be a good option in the long run since it saves up a significant amount of storage. (Dahlqvist, 2017)
5.2 Implementation
The implementation in the scope of this study is lightweight and is carried out for the purpose of testing out basic features of the Elastic Stack. The stack was set up on a single server in a virtual environment running Ubuntu 18.04 with Intel Xeon 2 vCPUs and 8 gigabytes of RAM.
Following the memory allocation practices defined in an earlier chapter, Elasticsearch should be assigned roughly half of the memory available on the server. Eight gigabytes of RAM is not enough for large deployments unless many such small machines are used. Since this implementation was only used to test the basics of the Elastic Stack, 8 gigabytes was deemed enough and did not affect overall performance. However, for deploying Elasticsearch to production, the recommended amount of RAM is 64 gigabytes. It is important to remember that Logstash and Kibana should also be accounted for when allocating memory in the stack.
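As an illustration, the half-of-RAM rule of thumb can be sketched as follows. The function name is ours, not an Elastic API, and the cap just below 32 GB reflects the common guidance to keep the JVM heap under the compressed-object-pointers threshold:

```python
def recommended_heap_gb(total_ram_gb, cap_gb=31):
    """Return a rough Elasticsearch JVM heap size in gigabytes.

    Rule of thumb: about half of available RAM, leaving the rest to the
    OS file-system cache, and never above ~32 GB so the JVM can keep
    using compressed object pointers.
    """
    return min(total_ram_gb // 2, cap_gb)

print(recommended_heap_gb(8))   # 4 GB for this 8 GB test server
print(recommended_heap_gb(64))  # capped at 31 GB for a production node
```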
To test one of the optimization tricks mentioned earlier in this chapter, a sample of raw log data was ingested into Elasticsearch. The same sample was then ingested into a different index, but with a reduced field count. The following Filebeat fields were removed from each log entry:
```
"hostname" => "mila-VirtualBox",
"agent" => {
"hostname" => "mila-VirtualBox",
"id" => "3cce9ee7-2f9f-4bb4-9b1c-25be78676753",
"version" => "7.5.0",
"type" => "filebeat",
"ephemeral_id" => "60444a6a-060c-4c15-8a0f-0f669a5eab8c"
}
```
The table below compares the sizes of the two samples:
Table 1. Results of the optimization test.
<table>
<thead>
<tr>
<th></th>
<th>Raw data size (MB)</th>
<th>Enriched and indexed data size (MB)</th>
<th>Size compared to raw data (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Original</td>
<td>50.2</td>
<td>69.3</td>
<td>138%</td>
</tr>
<tr>
<td>With reduced field count</td>
<td>50.2</td>
<td>66.2</td>
<td>132%</td>
</tr>
</tbody>
</table>
In this case the amount of disk space saved by removing unnecessary fields is 6 percentage points. It may seem small, but it can be worthwhile in the long run, especially if combined with other methods of disk optimization such as a more aggressive compression codec or disabling the default dynamic mapping.
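The percentages in Table 1 can be reproduced with a quick calculation:

```python
# Sizes from Table 1, in megabytes.
raw_mb, original_mb, reduced_mb = 50.2, 69.3, 66.2

pct_original = round(original_mb / raw_mb * 100)  # size relative to raw data
pct_reduced = round(reduced_mb / raw_mb * 100)

print(pct_original, pct_reduced, pct_original - pct_reduced)  # 138 132 6
```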
As mentioned previously, Elasticsearch does not require extensive CPU resources. During implementation, the average CPU usage over the three-week retention period was around 6%. Brief spikes above 60% CPU were observed only during Logstash start-up and data ingestion, and they did not affect any of the ongoing processes in the rest of the stack.
5.2.1 Security
Before merging into production, the server responsible for hosting the Elastic stack should have its network settings properly configured since data attacks can come from the network. All the traffic coming to, from and within an Elasticsearch cluster should be encrypted with SSL/TLS. (Elastic.co, 2019c)
Using X-Pack security features, role-based authentication was enabled in the cluster to prevent unauthorized access. Since the cluster was not listening on an external interface, no additional security measures were taken.
With default settings, Elasticsearch assumes that the cluster is in development mode. Hence, it is crucial that security settings are configured before running Elasticsearch. Once the network host is configured, Elasticsearch assumes the cluster is being moved to production. (Elastic.co, 2019c)
5.2.2 Log processing
As mentioned in Chapter 3, the raw log files that are used for this implementation contain erroneous timestamps, which need to be fixed during data preprocessing. Since the data in these log files is unstructured, a custom solution needed to be implemented to handle this issue. Appendix 1 contains a script written in Python that tackles timestamp fixing.
Once the data is preprocessed, it is ready to be ingested into Elasticsearch. Ideally, each device would send its own log files to Elasticsearch, but since log collection is outside of the scope of this study, a custom file location was set up where log files were sorted by device serial numbers. When Filebeat is running, it automatically detects new files added to the file location and forwards them to Logstash.
Logstash parses the log data with a simple configuration file:
```plaintext
input {
beats {
port => "5044"
}
}
filter {
  dissect {
    mapping => {
      "message" => "%{log_timestamp} %{channel} %{log_level} %{service} %{message}"
    }
  }
  grok {
    match => { "[log][file][path]" => "/home/mila/anna/%{GREEDYDATA:device}/" }
    break_on_match => false
  }
mutate {
lowercase => [ "device" ]
}
}
output {
elasticsearch {
hosts => [ "localhost:9200" ]
index => "%{device}-index"
}
}
```
Logstash uses the log file path to determine which device it is coming from and to index into the correct index. In the future, the configuration file might need to be tweaked depending on how log collection is implemented.
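The path-to-index logic performed by the grok and mutate filters can be sketched in Python as follows. The directory layout matches the custom file location described above; the function name is ours, introduced only for illustration:

```python
import re


def device_index(path):
    """Derive the Elasticsearch index name from a log file path,
    mirroring the grok (extract device) and mutate (lowercase) steps
    in the Logstash pipeline."""
    match = re.search(r'/home/mila/anna/(?P<device>[^/]+)/', path)
    if match is None:
        return None
    return match.group('device').lower() + '-index'


print(device_index('/home/mila/anna/SN12345/syslog.log'))  # sn12345-index
```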
Now the log file that is indexed into Elasticsearch contains two timestamps: `@timestamp`, which is added by Filebeat during the enrichment process, and `log_timestamp`, which is
added by Logstash. Choosing to keep two timestamps can be helpful in case the timestamp fixing fails at any point during data preprocessing. This way the user will still be able to determine an approximate time around which the events in the log files took place.
5.3 Potential issues and drawbacks
The current implementation of the Elastic Stack works exactly as intended: log files are shipped into Logstash, where they are parsed and processed. Afterwards they are ingested into specific indices in Elasticsearch. Kibana can be used to create different types of visualizations, and ElastAlert can be configured to send various kinds of alerts. The initial setup is relatively simple due to the low volume of data being ingested, but what happens when the size of the cluster keeps growing?
A bigger cluster means higher complexity. While the current implementation of the stack is able to handle log collection, this might not be the case when the amount of log files being ingested into Elasticsearch grows significantly. When the stack is pushed to production, there will most likely be a need for multiple instances of Logstash to handle different data sources. Logstash should also be configured to handle the data loss that comes with high spikes in data volume; a queuing and buffering mechanism such as Kafka or Redis should be configured to tackle this issue.
Another important consideration when designing an Elastic Stack solution is capacity planning. Earlier in this chapter, it was identified that memory management is crucial when setting up the Elastic Stack to ensure that it continues performing at the same speed after scaling up. This will require serious commitment and a proper long-term storage strategy as the system scales. Additionally, to perform at peak capacity, the system will require the right kind of hardware, and buying additional SSD disks alone can be quite expensive. Unfortunately, complexity is not the only factor that grows as the Elastic Stack implementation scales up and requires more configuration and fine-tuning. Even though this varies for each use case, configuring additional features can financially cripple an organization on a tight budget.
Security is another place where costs can add up. As mentioned earlier, proper security configuration is crucial before merging the stack into production. While essential features such as role-based authentication can be configured with the basic X-Pack subscription, several advanced tools are not available in the free tiers: IP filtering, audit logging, token services, and field- and document-level security, to name a few. Hence, it is very important to identify what kind of security level is expected from the log management system before deciding whether the free subscriptions of the Elastic Stack fulfill the criteria. The same planning should take place when considering the machine learning, monitoring and alerting tools that are only available in the commercial versions of the Elastic Stack.
All of the examples mentioned above show that setting up a highly available environment with multiple clusters will not be easy and will consume time and resources. The performance of the environment solely depends on how well the Elastic Stack infrastructure is organized to scale from the beginning. More costs add up as the need for additional storage grows. Enabling advanced security features as well as intelligence and analysis tools does not come with free subscriptions. Without commitment, expertise and serious capital investment, maintaining a powerful and highly available Elastic Stack cluster can become a tedious task with significant financial costs.
6 CONCLUSION
Setting up a log management solution is no easy task: before choosing tools and services for building such a solution, it is important to look at the data before it is stored for analysis. The vast amount of raw data that is constantly being produced results in more inconsistencies and errors within the data. High-quality data ensures better predictions and results; thus, data preprocessing plays a vital role in data analysis.
Data preprocessing can be performed using various tools. This thesis focused on how to build a custom prototype with Python to fix incorrect timestamps in log files.
The implementation of the Elastic Stack fulfills the basic functionalities of a log management solution. Filebeat is used to collect logs and can be configured to perform simple parsing. Logstash transforms data and prepares it for further ingestion. Elasticsearch stores the data and provides tools for querying. Kibana offers users visual feedback and analysis of the data. In addition to the main components of the stack, open source alternatives such as ElastAlert can be used to configure alerting for specified indices.
In order to reap all the benefits of the Elastic Stack, the current implementation needs to be improved in the future. Additional security measures need to be taken before the stack is merged into production to ensure that client data is not compromised. Log data that contains personal and sensitive information should also be considered for anonymization before being ingested into Elasticsearch. Additional dashboards and visualizations must be set up in Kibana in order to perform deeper and more detailed log analysis.
The findings of this study support that the Elastic Stack is a reasonable choice for a log management solution. When choosing between the Elastic Stack and a commercial solution, it should be noted that the former requires more labor and financial resources if the implementation needs to be scaled up. Adding additional features and storage to the stack can take its toll on a tight budget.
REFERENCES
**Appendix 1. Source code for data preprocessing**
```python
#!/usr/bin/python
#coding=utf-8
"""
File: fix_timestamps.py
Author: Mila Grigoryeva
Language: Python 3.5.2
Description: Script for fixing timestamps and converting to ISO format in log files
********************************************************************
"""
import datetime
import linecache
import os
import re
import pytz
START_BOOT = 'Booting Linux on physical CPU'
END_BOOT = 'Setting system time'
PATH = '/home/mila/thesis/'
def iso_timezone(timestamp, d_timezone):
    """Set correct timezone"""
    timezone = pytz.timezone("Europe/Helsinki")
    # For now only consider Nordic timezones
    if d_timezone == 1:
        timezone = pytz.timezone("Europe/Stockholm")
    aware_timestamp = timezone.localize(timestamp)
    iso_timestamp = aware_timestamp.isoformat()
    return iso_timestamp
def format_time(line, year, timezone):
    """Format timestamp to ISO with UTC offset and add year"""
    date_match = re.search(r'\w{3}\s+\d{1,2}', line)
    time_match = re.search(r'\d{2}:\d{2}:\d{2}', line)
    month, day = date_match.group().split()
    raw_timestamp = '{} {} {} {}'.format(month, day, year, time_match.group())
    timestamp = datetime.datetime.strptime(raw_timestamp, '%b %d %Y %H:%M:%S')
    formatted_timestamp = iso_timezone(timestamp, timezone)
    return formatted_timestamp
def get_correct_time(log_file, correct_index):
    """Get correct time in case log file contains wrong timestamps"""
    correct_time_line = linecache.getline(log_file, correct_index)
    # The uncorrected elapsed time at the start of the line (HH:MM:SS)
    minutes_passed = re.search(r'\d{2}:\d{2}:\d{2}', correct_time_line).group()
    # The full corrected timestamp reported on the same line
    timestamp = re.search(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}',
                          correct_time_line).group()
    return timestamp, minutes_passed
def create_time(line, correct_time_str, minutes_str, timezone):
    """Extrapolate new timestamps based on the next known correct time"""
    correct_time = datetime.datetime.strptime(correct_time_str,
                                              "%Y-%m-%d %H:%M:%S")
    minutes = datetime.datetime.strptime(minutes_str,
                                         "%H:%M:%S").replace(hour=0)
    old_time_str = re.search(r'\d{2}:\d{2}:\d{2}', line).group()
    old_time = datetime.datetime.strptime(old_time_str,
                                          "%H:%M:%S").replace(hour=0)
    new_time = correct_time - minutes + old_time
    iso_new_time = iso_timezone(new_time, timezone)
    return iso_new_time
def replace_time(line, new_time):
    """Replace formatted and correct timestamps in the logfile"""
    new_line = re.sub(r'\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}', new_time, line)
    return new_line
def main():
    """Convert to correct format and fix timestamps"""
    dirs = os.listdir(PATH)
    logfiles = [os.path.join(PATH, name) for name in dirs
                if os.path.isfile(os.path.join(PATH, name))]
    for logfile in logfiles:
        wrong_time = False
        start_index = []
        end_index = []
        with open(logfile) as log:
            for index, line in enumerate(log, start=1):
                if re.search(START_BOOT, line):
                    # Line contains wrong timestamp
                    wrong_time = True
                    # Write lines with wrong time to the list
                    start_index.append(index)
                elif re.search(END_BOOT, line):
                    # Sometimes setting system time line occurs when device
                    # wakes up from sleep
                    if wrong_time:
                        wrong_time = False
                        end_index.append(index)
        # Indices of the lines that contain wrong timestamps
        wrong_lines = list(zip(start_index, end_index))
        new_path = os.path.join(PATH,
                                'New-{}'.format(os.path.basename(logfile)))
        new_file = open(new_path, 'a')
        with open(logfile) as log_a:
            for index, line in enumerate(log_a, start=1):
                # Fix lines with wrong timestamps
                fixed = False
                for ranges in wrong_lines:
                    if index in range(ranges[0] - 2, ranges[1] + 1):
                        result = get_correct_time(logfile, ranges[1])
                        new_time = create_time(line, result[0], result[1], 0)
                        new_line = replace_time(line, new_time)
                        new_file.write(new_line)
                        fixed = True
                # Lines with correct timestamps are copied over unchanged
                if not fixed:
                    new_file.write(line)
        new_file.close()


if __name__ == '__main__':
    main()
```
2017 State of Application Security: Balancing Speed and Risk
A SANS Survey
Written by Jim Bird
Advisors: Eric Johnson, Barbara Filkins and Frank Kim
October 2017
Sponsored by
Rapid7, Synopsys, Tenable, Veracode, and WhiteHat Security
The speed of software development is accelerating—and so are software security risks. Large software development projects that used to take years to complete have been outpaced by smaller, agile teams that deliver working software every few weeks. High-speed cross-functional DevOps teams are pushing software changes directly to production, sometimes hundreds or even thousands of times each day. Organizations are taking advantage of cloud platforms and on-demand services, containerization, and automated build and continuous delivery pipelines to accelerate delivery cycle times and cut costs to the bone.
All of this radically changes how development teams—and their security/risk management teams—think and work.
What does security look like in a world of continuous change? How can security teams possibly keep up if they rely only on gate reviews and penetration testing to understand and control risk? What security procedures, tools and practices work better in a high-velocity development program? And can agility and velocity be used to improve security?
In our fifth annual survey on application security, 214 IT professionals responded to these questions. We wanted to learn how respondents are balancing speed and risk, so we compared the results of fast development teams that push out new programs and updates in a week or less to the results of slow teams, which take longer.
We compared how respondents test applications being pushed out into production, including what tools respondents’ organizations used, how often and when they tested their applications, who was responsible for testing, and how satisfied they were with their application security (AppSec) programs overall.
Key Findings
- 43% of organizations are pushing out changes weekly, daily or continuously.
- 66% of respondents report that only 10% or fewer of discovered vulnerabilities per month are critical and in need of immediate remediation, indicating that they are dealing with too much noise in their security assessments.
- 41% of critical vulnerabilities are fixed within one week, another 34% within one month.
1 Check out the previous application security surveys:
We also looked at how quickly and effectively they fixed problems. What we found was that application security assessment is, on the whole, moving faster. But some organizations are falling far behind in their testing: 24% rely on testing security once a year or less, much too infrequently to support the increased speed of development, while 10% still are not testing or assessing their business-critical applications at all. Most organizations are still relying heavily on audits and external reviews, pen testing and other manually intensive processes to find security vulnerabilities.
The good news is that organizations able to make changes to their code more quickly are also fixing more security vulnerabilities than their slower-moving competitors. They are achieving this by breaking down organizational silos, moving more responsibility for security testing directly to developers or cross-functional teams—and by taking advantage of end-to-end workflow automation, which integrates security into Agile and DevOps toolchains so they can test security faster and more often.
These and other risks and best practices are reported in the remainder of this paper.
Application security risks and threats are constantly changing. In this survey, more than 15% of organizations experienced breaches related to their applications in the past two years. While the major contributors to security incidents continued to be public-facing web applications and Windows OS, followed by legacy applications, we saw an increase in successful attacks against applications in the cloud—and now against containers.
**Survey Background**
Respondents came from a wide range of industries, including banking or finance (18%), technology (17%), cyber security services (10%), healthcare (8%) and application development firms (8%). Most respondents were from the United States (72%), with global representation across all sizes of organizations from small (up to 1,000 employees, 37%) to very large (over 50,000 employees, 20%). Reflecting the SANS community, 69% of respondents worked in security- or compliance-related roles, from hands-on administrators and analysts to senior managers and C-level executives.
For much of this survey, we sorted answers based on how frequently respondents’ organizations deployed changes to their production systems:
- **Fast (and really fast)**—deploying changes weekly, daily or continuously (several times per day), tending to follow more agile and lean incremental change approaches, including DevOps and continuous delivery
- **Slow**—rolling out changes monthly, quarterly or annually, following a more traditional approach to change
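The grouping above can be sketched as a small helper; the names and the exact answer strings are ours, introduced only to illustrate the fast/slow split:

```python
# Deployment-frequency buckets used to split survey respondents.
FAST = {'continuously', 'daily', 'weekly'}
SLOW = {'monthly', 'quarterly', 'annually'}


def speed_bucket(deploy_frequency):
    """Classify an organization by how often it deploys changes."""
    freq = deploy_frequency.lower()
    if freq in FAST:
        return 'fast'
    if freq in SLOW:
        return 'slow'
    return 'unclassified'


print(speed_bucket('Daily'))      # fast
print(speed_bucket('Quarterly'))  # slow
```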
Let’s look at where organizations face risk, how they address risks, what tools and practices they rely on, and what their priorities are.
Risk at the Application Level
Organizations continue to be mainly focused on protecting public-facing web applications and other custom applications developed in-house. Applications in the cloud (private clouds and to a lesser extent public clouds) and mobile apps are also important areas of focus, as illustrated in Figure 1.
*Figure 1. What types of applications are you protecting under your AppSec program? (Select those that most apply.)*
APIs are becoming a specific area of focus for 42% of organizations, and 28% of organizations are now dealing with applications hosted in containers such as Docker. In our 2016 survey, we asked which apps organizations were spending their resources on, and the answers were similar: public-facing web apps, followed by legacy apps, then customized apps, mobile apps and APIs.²
Risks and Breaches
Over the past two years, 15% of organizations responding to this survey experienced a breach, and, alarmingly, 21% don’t know whether they experienced a breach where applications were the source. This number is lower than in our 2016 survey, in which 23% of respondents reported their applications were the source of their breaches.3
This year, the biggest sources of breaches continued to be public-facing web applications and Windows OS, closely followed by legacy applications (which are often left untested because security teams either aren’t aware of them or don’t have access to their source code). Custom applications are another common target of attack. We are also seeing more successful attacks against APIs and applications in the cloud—and now containers, as shown in Figure 2.
*Figure 2. What applications or components were involved or were the cause of these breaches, and how widespread was their impact? (Leave blank those that don't apply.)*
4 www.cisecurity.org/controls
Speed Versus Breaches
In looking at respondents that experienced a breach and comparing their breach experience based on their speed of deploying changes, organizations that are changing continuously, daily or weekly are not experiencing more problems than organizations that make changes only annually. See Figure 3.
*Figure 3. Over the past two years, have any of your applications been the source of breaches, attacks on others or sensitive data leaks?*
Risk at the Language Level: New Languages, New Risks
Because different programming languages and toolsets present different challenges and opportunities to engineering and security teams—directly affecting how they deliver and test—it is important to understand security risk at the language and library level. See Figure 4.
Java and .NET continue to be major sources of security risk because they are still the most commonly used enterprise application development languages. However, JavaScript has recently overtaken .NET as a risk concern, reflecting its increasing popularity as a lighter-weight alternative. In 2016, Java led as the source of risk for 55% of respondents, followed by .NET for 44% and JavaScript for 40% of respondents.5
JavaScript is widely used to develop client applications, taking advantage of powerful front-end frameworks, such as Angular(JS), React and Ember (and libraries such as JQuery), and increasingly for server-based applications using Node.JS. These frameworks are an additional source of security risks. JavaScript and other dynamic scripting languages, for example PHP and Python, are also more difficult to check at build time than static languages, which means that more problems can escape to be found at runtime.
C/C++ continues to be a source of risk both because of lack of safe programming constructs and because these languages are often used to solve low-level programming problems, such as OS and platform services, device drivers or real-time/embedded software.
---
**Polyglot Programming: Flexibility Brings New Risks**
Polyglot programming, where development teams write code simultaneously in several different languages, is increasingly common in modern Agile and DevOps (continuous delivery/continuous integration) environments.
In polyglot programming, developers are encouraged to choose different languages, frameworks and runtimes based on what they believe is best suited to the specific problem they may be trying to solve or to learn about a new language or tool set. In microservices environments, where small, self-directing teams are each responsible for a specific service, polyglot programming can result in hundreds of different technologies that need to be tracked, understood and secured.
Automated toolchain support and even integrated development environment (IDE) support may be limited or nonexistent for new languages and frameworks. This is especially true for Static Application Security Testing (SAST) and software component analysis (SCA) tools, both of which constitute an important part of many security assessment programs, as we’ll see in this analysis. Organizations will need to develop secure coding guidelines, as well as review and assess their application frameworks for security capabilities and risks.
---
Platform Risks: Cloud and Containers
Cloud platforms and, more recently, containers are becoming an important part of IT programs to reduce operational costs and increase agility. Organizations can take advantage of scale, standardization and on-demand capacity for what Netflix, a pioneer in this space, calls “undifferentiated heavy lifting” and “NoOps”: simplifying and abstracting operations and making it transparent to developers.
Cloud services and containers allow developers to provision their own infrastructure on the fly, making it even easier and faster for them to launch new applications—and to make mistakes that could have an impact on reliability and security. However, these platforms also introduce risks around identity and access control, untrusted images, security orchestration, container “breakouts” and more.
As you can see from Table 1, 21% are currently hosting apps in the public cloud, and 31% plan to have apps running in the public cloud within the next two years.
<table>
<thead>
<tr>
<th>Where are applications hosted?</th>
<th>2017</th>
<th>Next 2 Years</th>
</tr>
</thead>
<tbody>
<tr>
<td>Public cloud</td>
<td>21.4%</td>
<td>30.5%</td>
</tr>
<tr>
<td>Private cloud</td>
<td>27.8%</td>
<td>31.9%</td>
</tr>
<tr>
<td>Hybrid</td>
<td>11.3%</td>
<td>15.6%</td>
</tr>
<tr>
<td>On-premises/Traditional data center</td>
<td>63.3%</td>
<td>49.4%</td>
</tr>
</tbody>
</table>
Security teams must catch up and understand these architectures and how to keep them secure.
Organizations continue to depend heavily on monitoring (IDS), vulnerability scanning, and identity and access management (IAM)—all classic security controls. Security training for developers is also seen as key, although not as important as in our 2016 survey, where it was by far the most valuable practice. Least useful: the sexy new stuff (such as Runtime Application Self-Protection [RASP] and cloud-based controls) and virtual patching, which, as we saw in earlier surveys, requires a high level of coordination between development, operations and tool suppliers. In Figure 5, we look at which security practices are used by slow- and fast-moving organizations.
**Figure 5. Comparing Speed of Change and Security Practices Used**
*(Bar values from the original chart are not reproduced here; Figure 5 compared how often fast- and slow-moving organizations use each of the following practices.)*
- Periodic vulnerability scanning
- Identity and/or access controls
- Continuous vulnerability scanning (continuous monitoring)
- Continuous monitoring for signs of attacks and IOCs
- Ongoing security training for developers and/or application managers
- Security architecture and design reviews
- Web application firewall (WAF)
- Preproduction vulnerability scanning
- Threat modeling
- Threat intelligence on application vulnerabilities
- Cloud-based application security management services
- Virtual patching
- RASP (Runtime Application Self-Protection)
7 See the series of AppSec surveys:
Continuous monitoring and continuous vulnerability scanning are especially important in fast-moving organizations that deploy changes at least weekly to catch problems that might get past testing and reviews. Security architecture and design reviews are not used as much as they are in slower-moving organizations, where reviews can be added as a stage gate. In fast-moving, iterative development environments, it is not obvious whether and when reviews should be scheduled and how they should be done.
Managing Cloud Risks
The same controls used to protect traditional data centers are also the most common controls used to protect systems in the cloud: Most organizations are relying on classic network security protection (IDS/IPS, firewalls) and account management controls, as shown in Figure 6.
Only a handful of organizations are taking advantage of newer cloud-based runtime protection services, including microsegmentation, cloud access security brokers (CASBs) or RASP. However, a significant percentage is using encryption, auditing and other mechanisms to protect data and customer privacy in the cloud.
Moving to the Cloud for Security Reasons
The arguments in favor of the cloud for operational cost savings are obvious, especially to online startups and businesses with highly varied demand cycles. But many organizations—especially enterprises and government agencies—have resisted moving to cloud services because of security, privacy and compliance reasons.
This is now changing, as major cloud providers continue to make massive investments in infrastructure security and availability, expanding and improving their operational controls, and now offering comprehensive security and compliance capabilities as part of their platforms.
Today, organizations are moving to the cloud not only because of operational economies of scale, but also to take advantage of these security and compliance strengths. One example is Capital One, whose CIO has gone so far as to state that he believes that, by leveraging the compliance and security services of its key cloud service providers, its applications are safer in the public cloud than in its own data centers.8
8 https://aws.amazon.com/solutions/case-studies/capital-one
Keeping Up with the Rate of Change
Although 10% of respondents say they aren’t doing any security testing at all, 85% of respondent organizations are assessing or testing the security of their mission-critical applications. Of these, 12% are doing security testing on a continuous basis. At the other extreme, 24% of all organizations are still relying on testing once a year or less.
Fast-moving organizations are more likely to be following Agile or DevOps practices such as continuous delivery.
Fast-moving organizations (those deploying continuously, daily or weekly) made up 43% of the survey base—of those, only a small percentage are pushing changes continuously (5%). The remainder (57%) deployed more slowly.
### Security Testing and Speed of Delivery
Teams that are moving faster should also be testing faster. To understand whether security testing is keeping up with the speed of delivery, we compared how often organizations make changes to how often they do security assessments. Results from organizations considered to be “fast” are highlighted in green. See Table 2.
<table>
<thead>
<tr>
<th>Frequency</th>
<th>Changes</th>
<th>Assess/Test</th>
</tr>
</thead>
<tbody>
<tr>
<td>Continuously (several times each day)</td>
<td>5.3%</td>
<td>11.9%</td>
</tr>
<tr>
<td>Daily</td>
<td>12.0%</td>
<td>11.3%</td>
</tr>
<tr>
<td>Weekly</td>
<td>25.4%</td>
<td>19.1%</td>
</tr>
<tr>
<td>More than once per month</td>
<td>17.7%</td>
<td>12.5%</td>
</tr>
<tr>
<td>Monthly</td>
<td>18.7%</td>
<td>19.1%</td>
</tr>
<tr>
<td>Quarterly</td>
<td>13.4%</td>
<td>17.3%</td>
</tr>
<tr>
<td>More than once per year</td>
<td>3.8%</td>
<td>13.1%</td>
</tr>
<tr>
<td>Annually</td>
<td>1.9%</td>
<td>21.4%</td>
</tr>
<tr>
<td>Less than once per year</td>
<td>1.9%</td>
<td>2.4%</td>
</tr>
</tbody>
</table>
Looking at the entire sample, that appears to be the case. But do teams that move faster also test more often? Table 3 reveals that the faster the development environment, the more frequently testing is done.
Table 3. Frequency of Testing Based on Fast or Slow Rate of System Change
<table>
<thead>
<tr>
<th>Speed of System Change</th>
<th>Continuous Security Assessment/Testing</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fast</td>
<td>8.54%</td>
</tr>
<tr>
<td>Slow</td>
<td>3.66%</td>
</tr>
</tbody>
</table>
What Is Driving the Need for Speed?
Time to market drives speed, of course, as organizations outrace competitors to deliver a new idea or service and establish market leadership. But there are many more reasons to speed up software delivery and the related security controls, such as:
- Shaping design using feedback from users in production through A/B experiments and controlled feedback loops
- Putting changes into the hands of users early to see what they like or don't like and what they use and don't use, instead of guessing and missing—or overdesigning
- Reducing development and delivery costs by eliminating waste, automating and standardizing work, and reusing code components
- Reducing project risks and business risks by quickly delivering minimum viable products (MVPs) stripped to the essentials, so that organizations can learn whether they are on the right track, and pivot toward a new design or use, or fail early and save resources
- Reducing operational risks by breaking big projects into smaller and simpler changes that can be tested and delivered in steps, eliminating the risks and impact of big bang rollouts
Leveraging speed in these ways has led to the incredible success of organizations such as Amazon and Google, enabling them to achieve true agility, or continuous delivery, at scale.
Testing and Delivery Velocity
As engineering teams continue to accelerate delivery, security personnel need to speed up security assessments. Security teams have traditionally depended on manual gate reviews in waterfall projects, especially audits and pen testing—practices that are mandated by compliance regimes such as PCI DSS. But how do you fit gate reviews and pen testing into continuous iterative development and delivery?
Even automated scanning can take hours or days to complete for large applications. That won’t work for apps that are deployed daily or several times per day.
Risks and Opportunities
To keep up with high-velocity delivery teams, security testing needs to be automated, fast, incremental, and made an in-line part of development and delivery workflows and pipelines.
Increased speed in Agile, DevOps and continuous deployment introduces new risks, including:
- Changes being made so quickly, and so often, that it is difficult to understand and review them for risk
- Lack of stage gates in iterative, incremental development and continuous flow, which means there are no natural points to insert reviews, tests or other controls
- Not enough time to do exhaustive testing or reviews before changes get pushed to production
- Constantly changing design, which means that the risk profile is also constantly changing
Speed introduces new opportunities to reduce risk, too:
- Frequent delivery drives teams to automate and standardize workflows, especially build-and-deploy pipelines, increasing control over and transparency into change, and reducing risk of unauthorized changes or insider attacks.
- Most changes are incremental and small, which makes it easier to understand and test, and safer to release each change.
- Research shows that constantly changing the attack surface of a system can make an attacker’s job more difficult.9
---
9 https://pdfs.semanticscholar.org/1148/f37a8ca0a5ca0a26178c7d85a063bd539725.pdf
Security Testing Tools and Practices
As illustrated in Figure 7, most organizations are still heavily dependent on manual testing and reviews, including pen testing and external compliance audits (required by regulations such as PCI DSS). This reflects the importance of compliance in driving security programs and controls, something we have looked at in earlier application security surveys.
How does your organization test applications for vulnerabilities? Select all that apply.
- Internal penetration testing
- Third-party penetration testing
- Automated code review or Static Application Security Testing (SAST)
- Compliance reviews or audits by a third party
- Manual code review
- Dynamic Application Security Testing (DAST)
- Container security scanning
- Open source and third-party component analysis
- Interactive Applications Security Testing (IAST)
- Runtime Application Self-Protection (RASP)
- Other
TAKEAWAY:
Pen testing and third-party reviews are still key parts of security programs, even for organizations that are pushing out changes several times a day.
Following pen testing, respondents selected automated code reviews (SAST), compliance audits, manual code reviews and testing by automated application scanning (DAST). In addition, 28% of organizations are scanning containers for security vulnerabilities, and 26% are using open source and third-party component analysis.
Automating Continuous Testing
To move fast, developers need to rely heavily on automated testing that fits into Continuous Integration and Continuous Delivery (CI/CD) cycles. Automated unit testing, which is the backbone of functional testing for Agile development teams, is good for finding regressions, but poor at finding vulnerabilities. Teams need to find other tools and approaches that support rapid cycling, such as:
- SAST/automated code review tools integrated into automated builds or directly into developer’s IDEs to catch mistakes as developers make changes
- Manual code reviews done as part of the code check-in workflow, using code-review management tools to help developers request and respond to reviews and track the results
- Automated software component analysis (SCA) for open source and third-party libraries, integrated into automated builds, and as part of code check-in
- Container vulnerability and security scanning integrated in similar ways
- DAST application scanning run as part of automated functional testing and acceptance testing
Trade-off with Automation
The faster teams move, and the more they rely on automation, the more tradeoffs they need to make. Because not enough time is available to run deep, exhaustive scans or other security tests in continuous testing, organizations need to scan first for the most critical vulnerabilities. Then they need to target recently changed code for incremental testing and rely on smoke tests to catch other critical mistakes. Rules and tests that take too long to run or are too noisy need to be tuned or cut out, leaving holes in test coverage.
This means that periodic pen testing, in-depth manual reviews, configuration auditing, deep scanning and fuzzing are still needed to find errors that escape tight automated loops.
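The idea of targeting recently changed code for incremental testing can be sketched in a few lines. The file-extension filter and function name below are illustrative assumptions, not part of any specific scanner discussed in this report.

```python
# Sketch: narrow an incremental SAST run to recently changed source files.
# The extension list and function name are illustrative assumptions, not
# part of any specific tool mentioned in this report.
import os

SCANNABLE_EXTENSIONS = {".java", ".py", ".js", ".ts", ".go"}

def incremental_scan_targets(changed_files, scannable=SCANNABLE_EXTENSIONS):
    """Return the subset of changed files worth passing to a code scan."""
    return sorted(
        path for path in changed_files
        if os.path.splitext(path)[1] in scannable
    )

changed = ["src/app/Login.java", "docs/readme.md", "ui/cart.js", "build/out.bin"]
print(incremental_scan_targets(changed))  # → ['src/app/Login.java', 'ui/cart.js']
```

In a pipeline, the changed-file list would typically come from the version control system (for example, the diff against the last scanned commit), so only the delta is scanned on each build while full scans run on a slower cadence.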
Smoke Tests
Automated, simple security smoke tests should be run after every change—in development and in production—to catch common but dangerous configuration errors and programming mistakes. These tests can be built using popular security test frameworks, such as Gauntlt, BDD-Security or OWASP ZAP, and configuration checkers such as Netflix’s Security Monkey.
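As a minimal sketch of what such a smoke test might check, the snippet below flags missing HTTP security headers in a response. The required-header set is an illustrative assumption; real suites built on Gauntlt, BDD-Security or ZAP check far more than this.

```python
# Sketch: a security "smoke test" that checks an HTTP response for a few
# dangerous-but-common configuration mistakes. The required header set is an
# illustrative assumption; real frameworks (Gauntlt, BDD-Security, ZAP)
# cover many more cases.

REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_security_headers(response_headers, required=REQUIRED_HEADERS):
    """Return required headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return sorted(h for h in required if h.lower() not in present)

headers = {"Content-Type": "text/html", "X-Content-Type-Options": "nosniff"}
print(missing_security_headers(headers))
# → ['Content-Security-Policy', 'Strict-Transport-Security']
```

Because the check is a simple assertion over response headers, it runs in milliseconds and fits naturally into the post-deployment stage of a delivery pipeline, in development and in production alike.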
Security Testing Personnel
External parties (auditors, pen testers, scanning services) and internal security teams are primarily responsible for security testing and assessments, while development teams and system architects are primarily responsible for corrective actions, according to respondents. See Figure 8.
However, an increasing amount of responsibility is being assigned to cross-functional teams (across dev/ops/sec), and directly to developers—especially in faster organizations.
Vulnerabilities Discovered
Most organizations (60%) find between one and 25 vulnerabilities per month. A small percentage finds more than a thousand per month. See Table 4. But the majority of the problems being found are not critical, as shown in Table 5.
This indicates that teams are wasting time (sometimes a lot of time) dealing with false positives, low fidelity findings and other noise in security testing.
Looking at testing results through a velocity lens shows that moving too fast can create risks when it comes to security testing, a relationship made clear in Figure 9.
Teams with the most rapid development procedures are also finding fewer vulnerabilities. Earlier results indicated that fast-moving organizations are also doing more frequent scanning and testing. This may indicate that they are doing a more superficial job of security assessment, because they need to fit their testing into fast feedback cycles—as we’ve explained, security testing takes time to do right.
Vulnerabilities Repaired
Faster organizations are more likely to fix vulnerabilities than their slower competitors, because the costs and risks of change in faster organizations are generally lower: The more often you do something, the better you get at it. See Figure 10.
What percentage of critical security vulnerabilities does your organization repair satisfactorily and in a timely manner?
Figure 10. Comparing Rate of Change to Speed of Vulnerability Repair
Overall, respondents reported that 41% of serious or critical vulnerabilities are fixed within a week of when they were found. They fix an additional 34% of their vulnerabilities within one month. See Figure 11.
On average, how long does it take for your organization to fix and deploy a patch to a critical application security vulnerability for systems already in use?

In fact, looking back year over year, all organizations are getting faster at fixing vulnerabilities in production, based on PCI’s 30-day patch rule. In 2016, 66% of respondents’ organizations achieved such levels of success, improving to 75% in 2017. See Table 6.
<table>
<thead>
<tr>
<th>Time to Correct a Vulnerability</th>
<th>2016</th>
<th>2017</th>
</tr>
</thead>
<tbody>
<tr>
<td>Same day</td>
<td>6.0%</td>
<td>5.9%</td>
</tr>
<tr>
<td>Next day</td>
<td>7.5%</td>
<td>5.9%</td>
</tr>
<tr>
<td>2–7 days</td>
<td>26.0%</td>
<td>29.6%</td>
</tr>
<tr>
<td>8–30 days</td>
<td>26.0%</td>
<td>33.7%</td>
</tr>
<tr>
<td>31–90 days</td>
<td>14.9%</td>
<td>6.5%</td>
</tr>
<tr>
<td>91–180 days</td>
<td>6.4%</td>
<td>4.7%</td>
</tr>
<tr>
<td>6 months to 1 year</td>
<td>3.2%</td>
<td>1.2%</td>
</tr>
<tr>
<td>More than a year</td>
<td>0.7%</td>
<td>0.6%</td>
</tr>
<tr>
<td>Unknown</td>
<td>8.5%</td>
<td>8.9%</td>
</tr>
</tbody>
</table>
Organizations are completing most (53%) vulnerability repairs through patches or upgrades to the runtime environment, 47% are handled at root cause by secure software development life cycle (SDLC) practices, and another 47% are completed by patching third-party or open source components, as shown in Figure 12. Automated testing and deployment, for example in Continuous Delivery, make this easier and safer to do.
How do you repair discovered vulnerabilities? Select those that most apply.
Note that the 47% of vulnerabilities that are corrected using root cause analysis identify problems in the SDLC. Understanding and fixing problems at the root cause takes time, for both agile and traditional organizations. This may be easier in Agile and DevOps environments because they encourage frequent and blameless retrospection and introspection, as well as continuous improvement. Agile teams should be more attuned to identifying root causes and acting on remediation plans.
Figure 12. Vulnerability Correction
Although much attention and money are invested in technology for secure development and testing, the biggest challenges organizations face in their application security programs involve people, not tools. See Figure 13.
What are your top three challenges in implementing application security for production systems at your organization?
*Indicate the top three in no particular order.*
- Bridging the gap between software development, security and compliance
- Silos between security, development and business units
- Lack of funding or management buy-in
- Lack of application security skills, tools and methods
- Shortage of technical resources to maintain security in production applications
- Lack of integrated security and remediation life-cycle workflow
- Fear of modifying production code (might “break the app”)
- Identifying all applications in the portfolio
- No clear definition of success (metrics, CSFs)
- Poor remediation, workflow and advice for fixing discovered vulnerabilities
- Waiting for service releases to fix problems
- Testing applications containing no source code (e.g., commercial off-the-shelf apps, third-party components)
- Lack of testing support for applications written in legacy languages
- Developing test scenarios or test cases that address security
- Visibility into containers
- Lack of testing support for applications containing new frameworks
- Other
*Figure 13. AppSec Challenges*
Bridging cultural and communications gaps and organizational silos, obtaining management buy-in, and dealing with a lack of security skills are all management problems. But respondents’ organizations are finding ways to overcome some of these problems. See Figure 14.
What are the most successful methods your organization uses to bridge the gap between software development, operations, security and compliance?
Select all that apply.
More than 45% attribute their ability to overcome challenges to adopting more effective testing methods across the SDLC, building cross-functional teams, and encouraging communications across teams and silos. More integrated technology is also playing an important role in breaking down silos: end-to-end testing, automated end-to-end workflows and integrated tools are helping to bridge gaps between teams and reduce risks.
People and process must come first. Technology is simply helping facilitate the workflow of a DevOps culture. These are difficult and deep organizational, cultural and management problems, which aren’t under the control of the security team or engineering teams to solve. Organizations are finding ways to solve these problems by building bridges between engineering and security teams using the following tools:
- Full life-cycle testing, all the way to deployment
- Cross-functional teams across dev/ops/sec
- Communications plans across teams
- Automated end-to-end workflows
- Integrated testing and development tools
These practices are all encompassed under what is being called DevOpsSec or DevSecOps: a collaborative, open approach to integrating engineering, security and compliance teams.
**DevOps and DevSecOps/DevOpsSec**
Breaking down silos, creating cross-functional teams, automating end-to-end workflows and testing, open communications, and transparency are all hallmarks of what is being called DevSecOps or DevOpsSec today.
DevOps is about applying Lean and Agile development principles, values, practices and end-to-end automated workflows to the release, deployment and operations of systems. Key ideas in DevOps include the following:
- Small, self-directing, highly collaborative cross-functional teams across development and operations
- Small, incremental (continuous) improvements
- Teams that are responsible and accountable for development, deployment, operations and support for the life of the system or service (you build it, you run it)
- Continuous delivery and deployment, where automated and repeatable build and deployment pipelines promote changes from development through testing, staging and production
- Infrastructure as code, which defines infrastructure configuration in code, and makes changes to configuration using the same type of automated pipelines as application changes
DevSecOps, or DevOpsSec, or sometimes Rugged DevOps, brings security (and where possible compliance, known as continuous compliance in DevOps circles) into the same model to create collaborative, open, transparent teams of people working across development, operations and security to understand and solve security problems together.
DevOps is moving faster, and security teams can leverage the speed of change to get security patches out more quickly and cheaply, closing the window of vulnerability to attack. Speed becomes an important security advantage.
As organizations continue to speed up, they are fundamentally changing how people think and work. Instead of big, long-running software development projects with waterfall handoffs between silos and outsourced maintenance, more work is being delegated to small, self-directed engineering teams responsible for building and operating services (or microservices and containers), taking pressure off operations/release management and security teams, and also eliminating bottlenecks in the workflow.
Faster decision making and faster delivery mean that security specialists need to get closer to engineering, so that engineers and security personnel can work together to identify and understand risks and manage them on a continuous basis.
This calls for people, process and technology changes that will bring dev/ops/sec together in cross-functional teams, providing more touchpoints, visibility, standardized workflows and transparency. However, this creates a critical scaling problem for AppSec programs in which skills are already in high demand. You will also need to push more responsibility for security directly to development and engineering teams, giving them training so that they understand more about AppSec risks and how to deal with them, and finding them automated tools that fit into how they actually think and work: iteratively, incrementally and rapidly. In that sense, security teams become enablers and coaches instead of enforcers and blockers.
Jim Bird, SANS analyst and co-author of DEV534 Secure DevOps, is an active contributor to the Open Web Application Security Project (OWASP) and a popular blogger on agile development, DevOps and software security at his blog, “Building Real Software.” He is the CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company's technology organization and information security program. Jim is an experienced software development professional and IT manager, having worked on high-integrity and high-reliability systems at stock exchanges and banks in more than 30 countries. He holds PMP, PMI-ACP, CSM, SCPM and ITIL certifications.
Barbara Filkins, a senior SANS analyst, holds several SANS certifications, including the GSEC, GCIH, GCPM, GLEG and GICSP, the CISSP, and an MS in information security management from the SANS Technology Institute. She has done extensive work in system procurement, vendor selection and vendor negotiations as a systems engineering and infrastructure design consultant. Barbara focuses on issues related to automation—privacy, identity theft and exposure to fraud, as well as the legal aspects of enforcing information security in today's mobile and cloud environments, particularly in the health and human services industry, with clients ranging from federal agencies to municipalities and commercial businesses.
Eric Johnson, the Application Security Curriculum product manager at SANS, is the lead author and instructor for DEV544 Secure Coding in .NET, as well as an instructor for DEV541 Secure Coding in Java/JEE. A senior security consultant at Cypress Data Defense, Eric’s experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research and developing security tools. He currently holds the CISSP, GWAPT, GSSP-.NET and GSSP-Java certifications.
Frank Kim leads the management and software security curricula for SANS, developing courses on strategic planning, leadership and application security. He is also a SANS certified instructor, helping to shape, develop and support the next generation of security leaders. Previously, Frank served as CISO at the SANS Institute, leading its information risk function, and executive director of cybersecurity at Kaiser Permanente, where he built an innovative security program to serve one of the nation's largest not-for-profit health plans and integrated healthcare provider. Currently, as founder of ThinkSec, a security consulting and CISO advisory firm, Frank helps leaders develop business-driven security programs.
SANS would like to thank this survey’s sponsors:
- Rapid7
- Synopsys®
- Tenable
- Veracode
- WhiteHat Security
## Upcoming SANS App Sec Training
<table>
<thead>
<tr>
<th>Event Name</th>
<th>Location</th>
<th>Dates</th>
<th>Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>SANS Austin Winter 2017</td>
<td>Austin, TX</td>
<td>Dec 04, 2017 - Dec 09, 2017</td>
<td>Live Event</td>
</tr>
<tr>
<td>SANS Security East 2018</td>
<td>New Orleans, LA</td>
<td>Jan 08, 2018 - Jan 13, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>SANS Amsterdam January 2018</td>
<td>Amsterdam, NL</td>
<td>Jan 15, 2018 - Jan 20, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>Community SANS San Jose DEV534</td>
<td>San Jose, CA</td>
<td>Jan 29, 2018 - Jan 30, 2018</td>
<td>Community SANS</td>
</tr>
<tr>
<td>Community SANS Indianapolis DEV534</td>
<td>Indianapolis, IN</td>
<td>Feb 05, 2018 - Feb 06, 2018</td>
<td>Community SANS</td>
</tr>
<tr>
<td>Cloud Security Summit & Training 2018</td>
<td>San Diego, CA</td>
<td>Feb 19, 2018 - Feb 26, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>Community SANS Seattle DEV534</td>
<td>Seattle, WA</td>
<td>Feb 26, 2018 - Feb 27, 2018</td>
<td>Community SANS</td>
</tr>
<tr>
<td>SANS San Francisco Spring 2018</td>
<td>San Francisco, CA</td>
<td>Mar 12, 2018 - Mar 17, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>SANS 2018</td>
<td>Orlando, FL</td>
<td>Apr 03, 2018 - Apr 10, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>RSA Conference 2018</td>
<td>San Francisco, CA</td>
<td>Apr 11, 2018 - Apr 16, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>SANS Baltimore Spring 2018</td>
<td>Baltimore, MD</td>
<td>Apr 21, 2018 - Apr 28, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>Community SANS New York DEV522</td>
<td>New York, NY</td>
<td>Apr 23, 2018 - Apr 28, 2018</td>
<td>Community SANS</td>
</tr>
<tr>
<td>SANS Security West 2018</td>
<td>San Diego, CA</td>
<td>May 11, 2018 - May 18, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>SANS Northern VA Reston Spring 2018</td>
<td>Reston, VA</td>
<td>May 20, 2018 - May 25, 2018</td>
<td>Live Event</td>
</tr>
<tr>
<td>SANS OnDemand</td>
<td>Online</td>
<td>Anytime</td>
<td>Self Paced</td>
</tr>
<tr>
<td>SANS SelfStudy</td>
<td>Books & MP3s Only</td>
<td>Anytime</td>
<td>Self Paced</td>
</tr>
</tbody>
</table>
|
Textlets: Supporting Constraints and Consistency in Text Documents
Han Han, Miguel Renom, Wendy Mackay, Michel Beaudouin-Lafon
To cite this version:
Han Han, Miguel Renom, Wendy Mackay, Michel Beaudouin-Lafon. Textlets: Supporting Constraints and Consistency in Text Documents. CHI ’20: CHI Conference on Human Factors in Computing Systems, Apr 2020, Honolulu, HI, USA, United States. pp.1-13, 10.1145/3313831.3376804. hal-02867300
HAL Id: hal-02867300
https://hal.archives-ouvertes.fr/hal-02867300
Submitted on 13 Jun 2020
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Textlets: Supporting Constraints and Consistency in Text Documents
Han L. Han, Miguel A. Renom, Wendy E. Mackay, Michel Beaudouin-Lafon
Université Paris-Saclay, CNRS, Inria, Laboratoire de Recherche en Informatique
F-91400 Orsay, France
{han.han, renom, mackay, mbl}@lri.fr
ABSTRACT
Writing technical documents frequently requires following constraints and consistently using domain-specific terms. We interviewed 12 legal professionals and found that they all use a standard word processor, but must rely on their memory to manage dependencies and maintain consistent vocabulary within their documents. We introduce Textlets, interactive objects that reify text selections into persistent items. We show how Textlets help manage consistency and constraints within the document, including selective search and replace, word count, and alternative wording. Eight participants tested a search-and-replace Textlet as a technology probe. All successfully interacted directly with the Textlet to perform advanced tasks; and most (6/8) spontaneously generated a novel replace-all-then-correct strategy. Participants suggested additional ideas, such as supporting collaborative editing over time by embedding a Textlet into the document to flag forbidden words. We argue that Textlets serve as a generative concept for creating powerful new tools for document editing.
Author Keywords
Text editing; Document processing; Reification
CCS Concepts
•Human-centered computing → Graphical user interfaces; Interaction techniques;
INTRODUCTION
Text editing was once considered a ‘killer app’ of personal computing [6]. Editing text is usually the first skill a novice computer user masters, and all personal computers are sold with a word processor. Many professions require advanced text editing skills to ensure consistent use of terms and expressions within structured documents, such as contracts, patents, technical manuals and research articles.
For example, lawyers begin each contract with a list of defined terms, and must use them consistently thereafter. This is critical, since ‘minor’ wording changes can have serious legal implications. For example, American patents define “comprises” as “consists at least of”; whereas “consists of” means “consists only of”, indicating a significantly different scope of protection. Sometimes terms are disallowed, e.g. the US Patent and Trademark Office (USPTO) does not accept “new”, “improved” or “improvement of” at the beginning of a patent title. Word limits are also common, such as the European Patent Office’s 150-word limit for patent abstracts.
Despite their many features, standard word processors do not address all of these professional needs. For example, although spell checking is common, flagging forbidden words or ensuring consistent use of particular terms must be done manually. Real-time counts of words and characters can be displayed for the whole document, but not for a single section.
Text editing was an active research topic in the 1980s, when personal computers and word processors first became mainstream. Although current research focuses on ‘new’ technology, we argue that document editing should also remain a topic of central importance, as it touches the lives of hundreds of millions of users. We take a fresh look, seeking to apply modern interaction design principles to increase the power of expression, while preserving simplicity of interaction.
We focus on a group of ‘extreme’ users—authors of technical documents—and seek to answer the following questions:
1. How do current software tools support professional technical writers?
2. How do professional users manage constraints and consistency when editing technical documents?
3. How can we create tools that better support these needs?
After reviewing the literature, we describe an interview study with contract and patent writers and highlight the problems they face in their professional editing tasks. We then introduce the concept of Textlet, which reifies the notion of selection in text documents. We show examples of novel tools based on this concept to address the needs identified in the study. Next, we describe the design of two prototypes that implement different types of textlets, and report on the use of one prototype as a technology probe to better understand how textlets support selective search and replacement of text. We conclude by arguing that Textlets can serve as a generative concept for creating powerful new tools for document editing.
RELATED WORK
We review research related to both word processing and code editing tools and practices. The latter is a particularly interesting form of technical document that requires professional software developers to manage multiple internal constraints, and the specific tools developed to ensure internal consistency in code text may inform our design. We also discuss the theoretical foundations underlying the design of Textlets.
Text Editing Practices
Text editing was an active research topic in the 1980s when word processors became mainstream. For example, Card et al. [11] modeled expert users’ behavior in manuscript-editing tasks; Tyler et al. [49] investigated the acquisition of text editing skills; and Rosson [42] explored the effects of experience on real-world editing behavior. Others examined paper-based editing practices to improve computer-based text editing [44, 40, 33] and collaborative writing [2, 14, 39].
More recent studies identify issues with modern word processors. For example, Sørgaard et al. [45] found that users rarely take advantage of text styles, and argue that this is because styles do not impose restrictions on the document structure. Alexander et al. [1] found that although users often revisit document locations, they seldom use the specific revisitation tools found in Microsoft Word and Adobe Reader. Chapuis et al. [12] examined users’ frustration with unexpected copy-paste results due to format conversion. This work identifies a clear mismatch between the advanced features offered by modern word processors and actual user practice, and highlights the need for new tools and concepts. While the above work focuses on general editing tasks, we are particularly interested in how authors manage constraints and ensure consistency when editing structured technical documents.
Tools to Support Text Editing
Researchers have created a variety of text editing tools to support annotation [53, 51, 43], navigation [1, 30, 50] and formatting [38]; as well as distributing editing tasks [7, 47] and taking advantage of a text’s structure [36]. We focus here on copy-paste [46, 8], and search-and-replace [35, 3], both especially relevant to supporting internal document consistency.
Chapuis et al. [12] propose new window management techniques to facilitate copy-paste tasks. Citrine [46] extracts structure from text, e.g. an address with different components, that can be pasted with a single operation. Multiple selection [37] offers smart copy-paste that is sensitive to source and destination selections, while Entity Quick Click [8] extracts information to reduce cursor travel and number of clicks. Cluster-based search-and-replace [35] groups occurrences by similarity, allowing entire clusters to be replaced at once. Beaudouin-Lafon’s [3] instrumental search-and-replace tool highlights all items at once, so users can make changes in any order, not only as they occur in the document.
Commercial applications such as Grammarly¹ check grammar and spelling by suggesting alternative wording, style and tone, among other features. However they do not ensure consistent use of specific terms, e.g. always referring to a party in a contract with a single name. Other software tools automatically generate consistent references, including Mendeley² and EndNote³ for researchers, and Exhibit Manager⁴ for legal professionals. Although automated reference management solves some problems, users still lack flexibility for others, e.g. creating a custom citation format. These tools are separate from the word processor, potentially distracting users from their documents and fragmenting workflow. Our goal, then, is to create unified tools that support user-defined constraints and ensure consistency in text documents.
Code Editing Practices
Code editing has been widely studied, especially copy-paste [26, 24], use of online resources [9] and drawings [13], and performing maintenance tasks [27]. A key challenge that emerges from these studies is how to manage dependencies. For example, Kim et al. [26] found that programmers rely on their memory of copy-pasted dependencies when they apply changes to duplicated code. Ko et al. [27] identified both ‘direct’ dependencies, e.g. going from a variable’s use to its declaration, and ‘indirect’ ones, e.g. going from a variable’s use to the method that computed its most recent value, and proposed ways of visualizing these dependencies in the editor. While technical document constraints are less stringent than in computer code, we hope to exploit certain commonalities.
Tools to Support Code Editing
We see program code as an extreme case of a technical document, with many internal constraints. For example, Toomim et al.’s [48] technique supports editing duplicated code and visualizing links among duplicates. To help programmers use web examples more efficiently, Codelets [41] treat snippets of code examples as ‘first-class’ objects in the editor, even after they are pasted into the code. Kery et al.’s [25] tool for lightweight local versioning supports programmers performing exploratory tasks, while AZURITE [52] lets programmers selectively undo fine-grained code changes made in the editor. Barista [29] supports enriched representations of program code, while Whyline [28] and HelpMeOut [20] support debugging tasks. Our challenge is how to build upon these concepts and tools but for non-programmers who manage less highly constrained technical documents.
Theoretical Foundations
We seek to create generic tools rather than ad hoc solutions, which requires adopting a principled design approach. Beaudouin-Lafon’s [3] Instrumental Interaction model extends and generalizes the principles of direct manipulation, and is operationalized by three design principles [5]: Reification turns commands into first-class objects or instruments; polymorphism applies instruments to different types of objects; and reuse makes both user input and system output accessible for later use. Although these principles have been explored in the context of graphical editing [15, 34], our focus here is on text editing.
¹ https://grammarly.com
² https://mendeley.com
³ https://endnote.com
⁴ https://exhibitmanager.com
STUDY 1: INTERVIEW WITH LEGAL PROFESSIONALS
Editing technical documents requires a complex editing process [16], especially to maintain the document’s constraints and internal consistency [18]. We conducted critical object interviews [32] to better understand how professionals manage such constraints and consistency in their technical documents.
Participants
We interviewed 12 participants (three women, nine men; aged 24-50). Their occupations include: contract manager, legal affairs director, candidate to a Ph.D. in law, lawyer, patent attorney, and patent engineer. All use Microsoft Word on either the Windows (11/12) or macOS (1/12) platforms; only one uses the latest 2019 version.
Procedure
All interviews were conducted in English, each lasting from 45-60 minutes. We ran four pilot interviews with colleagues to establish the protocol, then visited participants in their offices and asked them to show us specific examples of their current digital and physical documents. We asked them to describe a recent, memorable event related to editing that document, either positive or negative. The first two authors conducted all interviews, alternating asking questions.
Data Collection
All interviews were audio recorded and transcribed. We also took hand-written notes. We were not allowed to videotape or take photographs for confidentiality reasons.
Data Analysis
We analyzed the interviews using reflexive thematic analysis [10]. We generated codes and themes both inductively (bottom-up) and deductively (top-down), looking for breakdowns, workarounds and user innovations. After interviewing eight participants, the first two authors conducted the first analysis together, grouping codes into larger categories and focusing on participants’ editing behavior. We discussed any disagreements and rechecked the interview transcripts to reach a shared understanding. We also created story portraits [23] to graphically code the data, which helped us engage with the collected data and resolve disagreements. We arrived at the final themes after three iterations.
RESULTS AND DISCUSSION
We identified six themes: maintaining term consistency, managing dependencies by hand, reusing content, visiting and revisiting document locations, managing annotations, and collaboration.
Maintaining term consistency
All participants rely on their memories to maintain consistency across document terms, which are often defined at the beginning of the document. This causes problems, e.g. when P7 (legal affairs director) struggled to use the full name of a party across the document and P5 (patent attorney) often made the wrong choice between two words with highly similar meanings.
Sometimes terms must be changed, e.g. shifting from British to American English or if the client prefers another word. To avoid introducing inconsistencies, lawyers must update each term and its variations, e.g. singular vs. plural, and adjust verbs (P1), articles (P1,6,7,9) and pronouns (P9) accordingly.
Although all participants use “search and replace” to make consistent edits, most (9/12) avoid “replace all”: “It is too risky.” (P4); “I will not let the computer do it for me.” (P6); and “I prefer to do it manually.” (P5). Instead, they use a one-by-one search-navigate-check-replace strategy to manually replace each term. They ensure correctness by viewing and assessing each term’s context: “We have to conjugate the verb with the subject. It’s like a lot [of work].” (P4) Checking context is also essential for avoiding partial matches, i.e. when the search term matches a subset of a longer word (P3,11), which requires performing additional search-and-replace operations.
In summary, participants maintain consistency across terms primarily by hand, which they find cumbersome and prone to error. Most avoid “replace all” because they do not trust the results and cannot easily check them.
Managing dependencies by hand
We define a dependency as two or more sections of text that must be kept identical or consistent. Most participants (8/12) rely on their memories to manage document dependencies, and synchronize them by hand. We identified three types of dependency problems: managing consistency across pasted copies, numbering items, and managing cross-references.
All patent attorneys (4/4) copy text from the Claims section to the Summary of the Invention when drafting the latter. However, when they change the claims, they often forget to update the summary accordingly: “Because it is not automatically updated with the claims, I can easily forget to update.” (P6).
Patents contain three types of numbering systems (Fig. 1): claim number, claim dependency number (when the current claim depends upon another claim), and reference number (to specific parts of the illustration). Most patent attorneys (3/4) manage these numbers by hand instead of using Word’s cross-reference feature, typically leaving a gap between consecutive references. This lets them add additional numbers later, while ensuring that the reference numbers remain in ascending order.
Most lawyers and patent writers insist on maintaining full control of the text, especially the critically important claims section, even if the process is tedious. Even participants who are comfortable using automatic features do not rely on automatic numbering: P7 said: “most of the time, I prefer if something can be automatically achieved” yet avoids automatic numbering: “I cannot really tell you why. One reason might be that if I have automatic numbering set up, this would have become paragraph 2 and all the numbering of the claims would have been changed...I would not be very happy.”
In summary, their key reasons for avoiding automatic numbering include 1) their inability to differentiate automatic from normal numbering, unless they select the text; 2) incorrect display of references, e.g. when items are added to a list, until a manual update is triggered; and 3) invisibility of dependencies after an update, since they lack feedback and cannot be sure if the changes are correct.
Reusing content
All participants reuse previous document elements to create new documents, incorporating text, styles and templates. When copy-pasting a piece of text for reuse, they must often edit the content between copy and paste operations or adapt the format after pasting, e.g. using the brush tool (P6) or a macro (P7). If visual formatting results from applying a style, pasting new text can bring “bad” styles into the document and pollute existing styles: “When you copy-paste into a document, you can import the style of the [original] document. Too many unnecessary styles makes the document heavier and you have to remember which style to use. This is a mess.” (P4)
Most participants (10/12) use templates to create new documents, including pre-written text, preset styles or both. Although useful for writing letters, filling cover pages, generating tables and managing formatting consistency, participants still struggle with formatting issues caused by style conflicts. In summary, they often reuse content, but are not satisfied with the corresponding introduction of format inconsistencies.
Visiting and revisiting document locations
Participants rarely write or edit documents sequentially and often revisit different parts of the document. For example, P7 created a set of keyboard shortcuts to “jump to different parts of the document” because he needs to switch often. This is consistent with Alexander et al.’s findings concerning users’ revisitation behavior [1].
Participants also need better revisitation support when systematically going through the whole document, e.g. incorporating edits one by one or performing search-and-replace tasks. The latter often involves checking an earlier replacement, after the fact. Unfortunately, Word imposes sequential interaction, so users cannot return to the previous replacement: “The problem is that I cannot check. It made the replacement and it goes to the next occurrence, so I don’t see what just happened.” (P7). P8’s workaround to address this problem involves turning on “track changes” to leave an inspectable trace of each replacement. In summary, participants experience problems navigating their documents, especially with respect to tracking recent or oft-visited parts of the document.
Managing annotations
Some participants (4/12) appropriate and customize their tools to support comments and annotation, rather than using the dedicated features of their word processor. For example, P5 uses footnotes to add comments for his clients because he dislikes how the text gets smaller when using Word’s Track Changes. P7 avoids Track Changes altogether and uses different colors to encourage active reading and convey the importance of certain comments to his clients.
For documents with two or more co-authors, some participants (4/12) complained that the Track Changes feature introduces more problems than it solves (P2, P4) and makes it difficult to understand the modifications (P5, P7). Instead, some (3/12) use the comparison function after making changes, to make modifications visible to their clients.
Interestingly, P9 also used the comparison function to ‘cheat’. He modified the document with Track Changes on a Saturday night but did not want his client to know he worked over the week-end. So he accepted all the changes and then compared it to the original document on Monday morning, making it appear that the changes had been made on Monday. In summary, participants find annotation tools frustrating and constraining, and some creatively use other features to meet their needs.
Collaboration
Most participants (11/12) collaboratively edit documents. We categorize their collaboration strategy as branching (versioning and partitioning) and merging. When versioning, participants exchange documents via email and save successive versions to keep track of changes made to the document. They use simple suffixes to identify versions over email, e.g. V1, V2, V3, so documents with similar content hang around and are hard to find again. P12 complained that she created eight versions of the same document even though she made only minor changes. The notion of File Biography [31] could help them manage these issues. Local versioning, explored for code editing in Variolite [25], would also be useful but standard word processors do not support it.
Some participants partition the master document for co-authors to edit and merge it later. The problem in the merge stage is style pollution, as discussed above, due to foreign styles being imported through copy-paste (P4) or forgetting to format text (P2). Because the style panel in Microsoft Word is not displayed by default when users open a document, it is often hidden from users. As a result, formatting and style inconsistencies are often undetected.
When a version of a document is sent out and then returns with proposed changes, participants have to merge these changes into the master document. Even though they use the Track Changes feature of Microsoft Word, they usually make the changes by hand, going through each document and deciding which edits to incorporate. They do not accept all the changes for various reasons: “It might destroy the way [the text] was presented” (P5), “We do not consider all comments” (P6), “[clients'] comments are difficult to understand” (P7), or the changes require other modifications to be made in other parts of the text (P7). In summary, we found that participants manually version their documents, even for minor edits, and merge documents by hand, incorporating changes one by one, as they struggle with style pollution.
Summary
Study 1 shows not only that professional technical writers must maintain consistent use of terms, but also that they manage the resulting dependencies mostly by hand. They struggle to maintain formatting consistency when reusing text and lack tools for keeping track of their navigation within their document, flexibly generating annotations, and collaborating asynchronously. Based on these results and the theoretical framework provided by Instrumental Interaction [3], we propose a general solution to address some of their needs.
Textlets: Reifying Selection
General-purpose word processors such as Microsoft Word have hundreds of features. As we saw in Study 1, even when users know that a feature exists, such as ‘replace all’ or ‘automatic numbering’, they often prefer making changes by hand to stay in control. Rather than proposing specific new features to address the various use cases we observed, we seek a general approach that fits how they actually deal with text.
Word processors rely heavily on the concept of selection: the user selects a piece of text and then invokes a command using a menu, toolbar or keyboard shortcut that affects the content of the selection. However, the selection is transient: selecting a new piece of text causes the previous selection to be lost.
We introduce the concept of textlet as the reification [5] of a text selection into a persistent, interactive, first-class object. A textlet represents a piece of the text document identified as interesting to the user. Textlets can be highlighted in the document itself, listed in a side panel, or visualized through other interface elements, e.g. a scrollbar, for easy access.
To create a textlet, a user simply selects a piece of text and invokes a command, e.g. Ctrl+T. The selected text is highlighted and the textlet is listed in the side panel where a behavior (see below) can be assigned to it.
Textlets can also be created automatically by a higher-level object called a grouplet. For example, to create textlets that represent all the occurrences of a word in the document, the user creates a search grouplet (or searchlet), e.g. with the traditional Find command Ctrl+F. The searchlet appears in the side panel and the user can type the search string. A textlet is automatically created for each match of the search string and appears as an item underneath the searchlet. This list is automatically updated when editing the document or when changing the search string.
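This textlet/searchlet relationship can be sketched in a few lines of Python. The class and method names are illustrative, not a prescribed implementation: a textlet persists a selection as a pair of offsets, and a searchlet rebuilds its occurrence textlets whenever the document or the query changes.

```python
from dataclasses import dataclass, field

@dataclass
class Textlet:
    """A text selection reified into a persistent object (start/end offsets)."""
    start: int
    end: int

    def text(self, document: str) -> str:
        return document[self.start:self.end]

@dataclass
class Searchlet:
    """A grouplet that maintains one occurrence textlet per match of a query."""
    query: str
    occurrences: list = field(default_factory=list)

    def update(self, document: str) -> None:
        # Rebuild the occurrence list whenever the document or query changes.
        self.occurrences = []
        pos = document.find(self.query)
        while pos != -1:
            self.occurrences.append(Textlet(pos, pos + len(self.query)))
            pos = document.find(self.query, pos + 1)

doc = "The Buyer pays the Seller. The Buyer may cancel."
s = Searchlet("Buyer")
s.update(doc)
print(len(s.occurrences))  # 2
```

Calling `update` after every edit keeps the list current, which is what makes the occurrences trustworthy as navigation targets.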
The power of textlets comes from the behaviors associated with them. The most basic behavior is to (re)select the piece of text from the textlet, e.g. by double-clicking the textlet representation in the side panel. Other behaviors include the ability to change or automatically generate the content of the text, to change its style, and to attach annotations or additional information, such as character or word count. Creating textlets with different behaviors leverages the power of polymorphism [5] because a single concept (reified text selection) addresses a variety of commands (searching, counting, referencing), providing users with a unifying concept to manage text documents. This slightly extends the definition in [5], which focused on polymorphic instruments.
The rest of this section illustrates the power of textlets by describing how different behaviors can address some of the issues observed in Study 1. Table 1 summarizes the use cases and the solutions we have implemented.
Textlets for Consistent Reuse
Study 1 showed that technical writers often reuse portions of text or entire templates when creating new documents. They rely on copy-paste to incorporate parts of other documents, but this requires precisely (re)selecting the text to be copied.
With textlets, users can create text snippets specifically for reuse, such as common vocabulary and phrases, list templates, or pre-written paragraphs with placeholders. Reusing a snippet simply involves a drag-and-drop or click-based interaction with the textlet. Placeholders can themselves be textlets to highlight the parts that need to be filled in, so that they can be easily identified, selected, and replaced with the proper text.
These snippets can be collected in dedicated documents or embedded into other documents. Study 1 identified collaborative practices where users share a set of constraints and consistency criteria. By collecting reusable textlets in separate documents, they can easily share these documents and facilitate consistency across users and documents.
Textlets for Term Consistency
We observed that technical writers need to go back and forth in their documents to check for consistency or make consistent changes across the document. To that end, they often use the search command, but they do not trust the search-and-replace tool enough to perform replace-all actions blindly, and prefer to check the term and its context before each replacement.
Searchlets, briefly introduced earlier, can address these use cases by automatically searching for all the occurrences of a text in the document. A searchlet is a grouplet that creates an occurrence textlet for each match it finds in the document. These occurrences are listed under the searchlet in a side panel and automatically updated when the document changes. This supports fast navigation to each occurrence in the document, e.g. with a click on the occurrence in the side panel.
Searchlets support flexible search-and-replace. After specifying a replacement text for the searchlet, the user can replace all occurrences at once, or replace them one by one, in any order. At any time, including after a replace-all, it is possible to revert individual occurrences, giving users full control and visibility over their actions. Multiple searchlets can be active simultaneously, so that users can keep earlier searches around and get back to them later.
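The replace-and-revert behavior can be illustrated with a minimal sketch (the function names are our own; a full implementation would also shift the spans of the remaining occurrences after each edit): each replacement returns its new span and the replaced text, so any individual occurrence can later be restored.

```python
def find_occurrences(document: str, query: str) -> list:
    """Offsets of every match of `query`: a searchlet's occurrence list."""
    spans, pos = [], document.find(query)
    while pos != -1:
        spans.append((pos, pos + len(query)))
        pos = document.find(query, pos + 1)
    return spans

def replace_occurrence(document: str, span: tuple, replacement: str):
    """Replace one occurrence, in any order; return the new document, the
    new span, and the replaced text so the change can later be reverted."""
    start, end = span
    saved = document[start:end]
    new_doc = document[:start] + replacement + document[end:]
    return new_doc, (start, start + len(replacement)), saved

doc = "the Buyer pays. the Buyer signs."
spans = find_occurrences(doc, "Buyer")
# Replace only the second occurrence (users need not go in document order).
doc2, span2, saved = replace_occurrence(doc, spans[1], "Purchaser")
# Revert that individual replacement, restoring the saved text.
doc3, _, _ = replace_occurrence(doc2, span2, saved)
print(doc3 == doc)  # True
```

Keeping the saved text per occurrence is what allows reverting a single replacement even after a replace-all.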
When users navigate the document to check for consistency and to make changes, they often lose track of where they were when they started the check. Searchlets facilitate navigation among occurrences, but do not address the need for location tracking in the document. Building on previous work such as Read Wear [21, 1] and Footprints [50], a history grouplet can record recent selections and let the user navigate among them. Previous selections can appear as individual textlets in a side panel or, to save space, the grouplet can display arrows to navigate the history of selections.
Textlets for Length Constraints
Technical documents must often meet length constraints. For example, patent offices limit the number of claims in a patent, the number of words in the abstract, and the length of the patent title. Standard word processors require selecting the text each time to count the words in a specific area.
<table>
<thead>
<tr>
<th>Use Case</th>
<th>Issue</th>
<th>Solution</th>
</tr>
</thead>
<tbody>
<tr>
<td>Consistent Reuse</td>
<td>Recurrent copy-paste to start new documents from scratch requires re-selecting the text in one or more documents.</td>
<td>All textlets save their text, which can be reused using simple actions such as drag-and-drop.</td>
</tr>
<tr>
<td>Term Consistency</td>
<td>Repeatedly navigating across a document using search terms leaves no traces of scroll positions, making it hard to go back and forth.</td>
<td>Searchlets create occurrence textlets that let users navigate by interacting directly with them on the side panel.</td>
</tr>
<tr>
<td>Reference Consistency</td>
<td>Automated numbered lists and cross-references take control away from users. Numbered items and references do not update automatically.</td>
<td>Numberlets are counters that can be manipulated and applied to numbered lists, sections, figures, etc. References to numberlets can be created by copy-pasting them in the document. Item numbers and references are always up to date.</td>
</tr>
<tr>
<td>Length Constraints</td>
<td>Standard word processors require selecting text each time to count words in a specific area and get other metrics.</td>
<td>Countlets add a persistent decoration to the text of interest that displays a word count and updates it as users edit the content.</td>
</tr>
<tr>
<td>Exploratory Writing</td>
<td>Keeping track of alternatives is difficult. Undo/redo is not adapted to go back and forth between versions.</td>
<td>Variantlets store alternative versions of textlets that can be easily retrieved, compared and edited.</td>
</tr>
</tbody>
</table>
Table 1. How different textlet behaviors address some issues observed in Study 1.
Textlets for Reference Consistency
Standard word processors include tools for managing certain types of dependencies automatically, most notably numbered lists and cross-references. Study 1 showed that participants distrust and struggle with automatically numbered lists, and thus avoid automated cross-reference management tools.
Documents often include numbered items such as sections, figures, patent claims or references. Both the numbered items and the references are good candidates for textlets: Both are computed textlets, i.e. their content is computed and updated as the document changes, but the user can still interact with them. A numberlet is a grouplet that creates numbered items and ensures that the number sequence matches the document’s item order. Each numbered item is itself a grouplet for creating and managing textlets representing references to that item.
Numberlets, numbered items and references can be listed in the side panel for easy navigation. Creating new numbered items and new references involves a simple drag-and-drop or clicking on the corresponding textlet.
This design may seem complex compared to the automatic numbering and cross-referencing features of standard word processors, but it leaves users in control by turning numbered items and references into objects that they can see and manipulate while the system maintains consistency during document editing. It is also more powerful and flexible than the predefined types of references offered by standard word processors. For example, Microsoft Word 16 for Mac can cross-reference Headings, Bookmarks, Footnotes, Endnotes, Equations, Figures and Tables, but not Articles or Claims, which are used extensively by contract and patent writers. Numberlets let users control what types of numbered items they need, providing flexibility within a unified interface.
Textlets for Length Constraints
Word count and character count limits are common in technical documents. For example, patent offices limit the number of claims in a patent, the number of words in the abstract, and the number of characters in the patent title. Standard word processors include tools to count words and characters in a selection, but they require users to reselect the text and recount after every modification. Microsoft Word shows the total word count of the entire document and current selection in real time, but counting the characters in, e.g. a section of the document requires selecting the text and bringing up a modal dialog.
Counting textlets, or countlets, solve this problem by counting the number of words or characters in a segment of the document and displaying it in the document itself and/or a side panel. As the user edits the text, the counter updates, avoiding the need for special commands or re-selection. The user can set a threshold above which the textlet will signal that the text is too long. Additional metrics could easily be included, such as the text’s estimated reading time. Such timelets would be useful, e.g. for journalists and authors of video subtitles.
Textlets for Exploratory Writing
Study 1 showed how professional technical writers often need to manage multiple alternatives for parts of a document, before deciding or agreeing on which one to keep. Although standard word processors support change tracking, this is insufficient, since it tracks all edits, not the intermediate versions the user may want to keep. Participants must either make copies of the entire document, or use colored text or comments to list alternatives within the document.
Variant textlets, or variantlets, let users keep track of the changes made to a selection rather than the entire document. We were inspired by Explorable Multiverse Analyses [17], where alternative analyses can be embedded in a research paper and selected by the reader to view them in context. A variantlet saves the original content of the selected text. After editing the text, the user can swap it with the original version for immediate comparison, and swap again with the edited version. More sophisticated behaviors can be added to manage multiple alternatives, such as displaying the alternatives side by side or displaying the changes in a manner similar to the track changes mode of word processors. Variantlets provide greater control over version management by supporting local versioning. A similar concept is featured in Variolite [25] for code editing.
Generative Power
The previous examples show the power of textlets to support a variety of tasks. We have also identified other behaviors for textlets that could be useful for a wider range of use cases:
- Attaching comments, summaries, translations, word-scale graphics [19] or emojis and adding decorations to a textlet, e.g. highlighting or badges, to annotate the document;
- Supporting arbitrary computed content, such as Victor’s Reactive Documents [5], where a textlet is defined by a formula that refers to other textlets, as in a spreadsheet;
- Controlling the style and formatting of the text by associating style attributes with the textlet;
- Crowdsourcing the text of a textlet or a collection of textlets for reviewing or grammar checking, as in Soylent [7]; and
- Organizing textlets freely in a canvas to help analyze or annotate the content of a document.
The generative power [4] of textlets comes from the combination of a set of behaviors:
- Navigating to the text of the textlet in the document;
- Selecting the text of the textlet, leveraging all the existing commands that act on the selection;
- Replacing/modifying text either based on user edits or automatically;
- Modifying the style of the text;
- Adding decorations that are not part of the text itself; and
- Representing and manipulating textlets in a separate view, such as a list in a side panel.
Generative power also comes from the ability to create textlets not only directly, by selecting text in the document, but also automatically, by using grouplets that identify and live-update a set of matching textlets. Grouplets let users deal with dynamic collections of text in a concrete way, whereas standard word processors typically offer advanced, specialized commands that users hesitate to learn and use. Although textlets may involve more actions than these specialized commands, we argue that users are more likely to try them, and will save time compared to the manual solutions users resort to.
PROOF-OF-CONCEPT IMPLEMENTATION
In order to demonstrate the concept of textlets, we created a proof-of-concept implementation with four types of textlets: word count (countlets), text variants (variantlets), numbered references (numberlets), and search-and-replace (searchlets). These textlets address multiple use cases described in Study 1.
We created two prototypes as plugins to the ProseMirror web-based word processing toolkit. The first prototype (Fig. 2a) was developed internally as our first proof of concept and implements countlets, variantlets and numberlets. The second prototype (Fig. 2b) implements searchlets and was developed in an iterative process with the participants in Study 2, where it was used as a technology probe [22].
Overall Interface
The main window contains the text document, with a traditional toolbar for basic formatting at the top, and a side panel dedicated to textlets on the right. The panel features a toolbar for creating new textlets and the list of textlets themselves. It also features grouplets, with their list of textlets below them. A textlet is created using any of three techniques:
a) Selecting the text content in the document and clicking a creation tool in the toolbar;
b) Clicking a creation tool in the toolbar and selecting the text content in the document; or
c) Entering a keyboard shortcut.
These techniques are also used to create grouplets, depending upon their type: some grouplets require a text selection, others not, and some may require additional information. Each textlet has a context menu that lets users navigate to the original text in the document, select that text, and delete the textlet. The menu also contains textlet-specific behaviors, such as search and inspector for the searchlet (see below).
Countlets
Our implementation of countlets (Fig. 3) decorates the selected text with a handle at each end. These handles let users change the scope of the textlet. The right handle also displays the word count of the text in the textlet, which is updated in real time as the user edits the content. A right-click on the countlet lets users set a threshold. The counter is displayed in red when its value is higher than the threshold. Deleting the textlet simply removes the word count.
---
5 http://worrydream.com/Tangle/
6 http://prosemirror.net
Figure 3. Countlet: a textlet for counting words.
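The countlet's live count and threshold check can be modeled independently of any editor. The following sketch uses plain JavaScript with illustrative names (`Countlet`, `update`, `overLimit` are not the prototype's API); the actual prototype implements this as a ProseMirror plugin:

```javascript
// Minimal countlet model: counts words in a text span and
// reports whether a user-set threshold is exceeded.
// Names are illustrative, not the prototype's API.
class Countlet {
  constructor(text, threshold = Infinity) {
    this.threshold = threshold;
    this.update(text);
  }
  update(text) {
    // Split on runs of whitespace; an empty span has zero words.
    this.words = text.trim() === "" ? 0 : text.trim().split(/\s+/).length;
    return this.words;
  }
  get overLimit() {
    return this.words > this.threshold; // rendered in red in the UI
  }
}

const c = new Countlet("A normal six-sided die", 3);
console.log(c.words, c.overLimit); // 4 true
```

In the prototype, `update` would be called on every document transaction that touches the textlet's range, so the displayed count never goes stale.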
Variantlets
Our implementation of variantlets (Fig. 4) supports a single alternative text. When the user creates the variantlet, its content is stored. The user can edit the content, and swap it with the stored one by clicking a button in the side panel representation of the variantlet. The user can thus easily view and edit the two variants. Combining a variantlet with a countlet lets the user instantly compare the two lengths by switching between the two alternatives. A more complete implementation of the variantlet should include an additional button to save additional versions and a way to navigate through the versions and swap any one of them with the selection.
Figure 4. Variantlet: a textlet for editing local versions.
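The swap behavior amounts to exchanging the stored alternative with the current content of the span. A minimal model, with hypothetical names and ignoring editor integration:

```javascript
// Minimal variantlet model: stores one alternative version of a
// text span and swaps it with the current version on demand.
class Variantlet {
  constructor(current) {
    this.current = current;  // text shown in the document
    this.stored = current;   // saved alternative
  }
  edit(newText) {
    this.current = newText;  // user edits the in-document text
  }
  swap() {
    // Exchange the visible text with the stored alternative.
    [this.current, this.stored] = [this.stored, this.current];
    return this.current;
  }
}

const v = new Variantlet("draft A");
v.edit("draft B");
console.log(v.swap()); // "draft A"
console.log(v.swap()); // "draft B"
```

The "more complete implementation" mentioned above would replace the single `stored` slot with a list of versions plus an index for navigating them.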
Numberlets
Our implementation of numberlets (Fig. 5) uses grouplets to create counters, new numbered items for a given counter, and new references to a given numbered item. The user creates a new counter by selecting a piece of text that contains a number or a hash sign (#), e.g. Article #. This text serves as a template for the numbering scheme. The new counter appears in the side panel as a button. Clicking this button inserts a new numbered item (the numberlet) at the cursor position, with the proper number. This numberlet is added to the side panel and is also a grouplet: clicking it inserts a reference to that item in the text at the cursor position, as well as the corresponding reference textlet (or refilet) in the side panel.
Numbered items and references are updated when the content of the document changes. The numbering of items follows their order of appearance in the document, and is therefore updated when moving text around. If a numbered item is removed and there are dangling references to it, these references show the error. All updates are immediately visible in both the text and the side panel, ensuring consistent numbering at all times. An additional feature (not implemented) should let users drop a reference textlet below another numbered item to change the reference to that item. This would make it possible to re-attach dangling references.
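The renumbering logic can be sketched independently of the editor: item numbers are a pure function of document order, and references resolve by item identity, so moving or deleting an item renumbers everything and exposes dangling references. A minimal model in plain JavaScript (names and the `"Article #"` template are illustrative; the prototype integrates this with the editor document):

```javascript
// Minimal numberlet model: items are numbered by order of appearance;
// references resolve by item id, so deletions leave dangling references.
function renumber(items) {
  // items: array of { id } in document order
  const numbers = new Map();
  items.forEach((item, i) => numbers.set(item.id, i + 1));
  return numbers;
}

function renderRef(refId, numbers, template = "Article #") {
  const n = numbers.get(refId);
  return n === undefined ? "<dangling>" : template.replace("#", n);
}

let items = [{ id: "a" }, { id: "b" }, { id: "c" }];
let nums = renumber(items);
console.log(renderRef("b", nums)); // "Article 2"

// Deleting item "b" leaves its references dangling,
// and later items are renumbered:
nums = renumber(items.filter(it => it.id !== "b"));
console.log(renderRef("b", nums)); // "<dangling>"
console.log(renderRef("c", nums)); // "Article 2"
```

Because numbering is recomputed from document order on every change, the text and the side panel can never disagree, which is the consistency guarantee described above.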
Searchlets
Our implementation of searchlets (Fig. 6) supports flexible search and replace by extending Beaudouin-Lafon’s search-and-replace instrument [3]. A searchlet is created by clicking the creation tool and then specifying the search text, or by selecting the search text in the document and clicking the creation tool. Users can also create a blank searchlet and then enter the search string. Enabling the search behavior finds all occurrences of the search text, highlights them in the document and displays the number of occurrences in the panel. The usual “word matching” and “case sensitive” options become available in the menu to refine the search (Fig. 2b).
Navigating Occurrences
Enabling the inspector behavior generates the list of occurrences below the searchlet in the panel, highlights them in the document, and gives access to the replace capability (Fig. 6). Changing the search string or the search settings re-runs the search and updates the list of occurrences underneath it. Editing the document also dynamically updates the list of occurrences: typing the searched text in the document creates a new occurrence and changing the text of an occurrence in the document removes it from the list of textlets if it does not match anymore.
Each occurrence is a textlet that displays the text surrounding the match in the document and updates it in real time. An occurrence can be expanded by clicking it to better show the context (Fig. 6). The user can then click the ellipsis buttons to show more context.
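The occurrence list can be modeled as a pure function of the document and the search string, which is what makes live updates straightforward: after any edit, re-running the function rebuilds the list. A minimal sketch with illustrative names and no editor integration:

```javascript
// Minimal searchlet model: finds all occurrences of a search string
// and records each with a snippet of surrounding context, as shown
// in the side panel. Re-running after an edit rebuilds the list.
function findOccurrences(doc, query, context = 10) {
  const occurrences = [];
  let from = doc.indexOf(query);
  while (from !== -1) {
    occurrences.push({
      from,
      to: from + query.length,
      // Text around the match, clipped at the document start.
      snippet: doc.slice(Math.max(0, from - context),
                         from + query.length + context),
    });
    // Advance by one so overlapping matches are also found.
    from = doc.indexOf(query, from + 1);
  }
  return occurrences;
}

const doc = "the mouse ran; the mice hid";
console.log(findOccurrences(doc, "mouse").length); // 1
console.log(findOccurrences(doc, "the").length);   // 2
```

In the prototype, each returned occurrence is additionally reified as a textlet so that it keeps tracking its position as the document changes, rather than being recomputed from a fixed offset.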
Occurrences can be moved, including under another searchlet, giving users flexibility to organize the search results as they see fit. For example, occurrences of different spellings of a word can be identified with different searchlets and then grouped under one searchlet, after which they can all be replaced at once. When moved, occurrences adopt the color of their new host searchlet. They also “belong” to their new host for the purposes of the replace-all action. In the current implementation, they disappear from the list when the search string or the search settings of the new host are changed.
Replacing Text
Selecting “Replace Matches” in the searchlet context menu (Fig. 2b) shows a text input field for typing a replace string and a button for replacing all occurrences in the list. Each occurrence textlet also includes three buttons that: replace only that occurrence, revert to the previous text, or ignore this occurrence from future replace-all operations. These actions can also be performed in the document itself using keyboard shortcuts.
Replaced occurrences stay in the textlet’s occurrence list until a new search string is entered for that searchlet. This lets users work with the occurrences and make changes to the document after they perform a replace operation without losing track of the positions that were originally matched.
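The per-occurrence replace, revert, and ignore actions amount to keeping each occurrence's original text alongside its current one, while the list itself persists after a replacement. A minimal model (hypothetical names, not the prototype's API):

```javascript
// Minimal replace model: each occurrence keeps its original text so
// it can be reverted individually; ignored occurrences are skipped
// by replace-all. The list itself persists after replacement.
function makeOccurrence(text) {
  return { original: text, current: text, ignored: false };
}

function replaceAll(occurrences, replacement) {
  for (const occ of occurrences) {
    if (!occ.ignored) occ.current = replacement;
  }
}

function revert(occ) {
  occ.current = occ.original; // individual undo for one occurrence
}

const list = [makeOccurrence("mouse"), makeOccurrence("mouse"),
              makeOccurrence("mouse")];
list[1].ignored = true;          // skip the second occurrence
replaceAll(list, "rodent");
revert(list[2]);                 // undo the third replacement only
console.log(list.map(o => o.current)); // [ 'rodent', 'mouse', 'mouse' ]
```

This is exactly the state that supports the replace-all-then-correct strategy observed in Study 2: replace-all is cheap to apply because every individual change remains reversible afterwards.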
Searchlets extend Beaudouin-Lafon’s previous work [3] by supporting multiple simultaneous searches. Each occurrence is reified as an item in the side panel, which supports additional functions such as disabling an occurrence in a global replace, or moving an occurrence to another searchlet. Our design is also grounded in our observations of the real-world challenges experienced by a group of professional users.
**STUDY 2: SEARCH-AND-REPLACE TEXTLET**
We used our second prototype as a technology probe [22] to evaluate searchlets with an observational study. We did not run a comparative study with, e.g., Microsoft Word as a baseline because many features that we implemented do not exist in Word or are clearly faster, e.g., a persistently-displayed character count with countlets, versus highlighting text and invoking Word’s word count command.
Our goals were to gather feedback, identify potential novel and unexpected uses, and discuss new ideas with the participants in order to refine our design. The study focused on searchlets, but we also showed the participants the other textlets from the first prototype. We incorporated suggestions incrementally so that successive participants used slightly different versions of the probe.
**Participants**
We recruited eight participants: three patent attorneys and one patent inventor (one woman, three men; aged 29–50; users of various versions of Microsoft Word), and four researchers (one woman, three men; aged 24–26; LaTeX users). Three of the patent attorneys had participated in Study 1. We included researchers because we believe that textlets address the needs of a wider range of users than those in Study 1, and authors of research articles must also manage consistency in their papers.
**Apparatus**
The prototype is a Web application accessed in the participant’s browser of choice on their own computer. We provided a 13” MacBook Pro laptop running macOS 10.14 and Firefox 68.0 for participants who did not have a computer at hand. We created two sets of documents to match the participants’ backgrounds: two patents and two research papers.
**Procedure**
We started by describing the features of the Textlets prototype, and gave participants 10 minutes to experiment with it. We used a think-aloud protocol and asked participants to perform two similar tasks on two documents: one using the editor of their choice and the other using the Textlets prototype. We counterbalanced for document order across participants.
Each task consisted of three small exercises with increasing difficulty: 1) replace a word by another and then change it back; 2) replace a word by another but only in certain contexts; and 3) replace two words with similar meanings with another word, including all relevant variations. Thus replacing “mouse” with “rodent” also requires changing “mice” to “rodents”.
The two tasks, each with three exercises, took approximately 20 minutes. After an interview, participants completed a short questionnaire. The session ended with a debriefing to identify additional use cases and discuss ideas for improvement. We also showed participants the countlets and variantlets from the first prototype, and asked them to describe scenarios for which they might be useful.
**Data Collection**
We recorded audio, took hand-written notes during the session, and collected the answers to the questionnaire.
**Results and Discussion**
All participants successfully interacted with the textlet prototype and found the tasks representative of their everyday work. The textlet side panel was “faster to use” (P1, P3). It avoids jumping to the main text (P1, P2, P3, P6), so that they can focus on the relevant document parts, thus reducing mental workload. Most participants (6/8) preferred making changes directly with a searchlet over Word’s non-interactive side panel. Two participants (P1, P2) asked for even greater interactivity with searchlets, such as one-by-one replacement directly from the panel, which we added in a later version, and merging two searchlets to apply the same replacement to their occurrences. We added other small improvements based on participants’ feedback, including better colors and icons, and decluttering the textlet interface by using a menu instead of a series of buttons.
**Replace-all-then-correct Strategy**
Most participants (7/8) used a one-by-one search-check-replace strategy with both Microsoft Word and LaTeX: They search for the word, go to each occurrence in the main document to check the context and then perform the replacement, either by clicking a button or retyping.
Participants used a different strategy with textlets, which we characterize as search-overview-replace. They started by creating one or more searchlets, scanned the overview of the occurrences to see the variations and assessed which ones to replace. P1 said: “I can see immediately what variations are in the text [from the side panel]. So I see it will work by replacing all matches”.
The combination of overview and contextual information around each match encouraged participants to spontaneously develop two different strategies for the final search-overview-replace step: Six participants used a replace-all-then-correct strategy, first replacing all occurrences, then checking each replacement in the overview list for errors, which they corrected either with the ‘revert’ button or by retyping in the document. The other two participants (P6, P7) used an ignore-replace-all strategy, first pressing the ‘ignore’ button to skip outliers, then applying ‘replace-all’, similar to the ‘perfect selection’ strategy in [35]. In summary, although participants were reluctant to use replace-all with their regular word processor, they felt comfortable using the searchlets’ replace-all and quickly developed strategies for selective replacement.
Persistent Selection: Keep Track, Individual Undo
Although both Microsoft Word and TeXworks (a LaTeX editor) provide an overview list of all search occurrences, they do not track them by position. By contrast, searchlets create persistent occurrences that help users keep track of what happened. P5 felt more confident with the prototype, saying: “Here (pointing at the side panel) I can see the changes in context. It helps me [and] reassures me that I did the right thing.”
Furthermore, searchlets let users check the results of their previous replacements. The overview of occurrences persists in the panel even as users edit the document. This differs from other word processors that clear the search whenever the user types in the document, which forces users to tediously re-enter the search text. For example, P3 said: “I have this list of all the occurrences. When I want to do some replacements, I choose some of them and I keep the whole list that I can always check [in the side panel]. This is quite important...I do not need to proof-read the whole text.”
Because each occurrence is also a textlet with its own history of changes, it can be undone individually and ignored in a replace-all command while still remaining in the overview list. These novel behaviors contributed to most participants (6/8) spontaneously adopting a replace-all-then-correct strategy. For example, P3 said while performing a task: “Maybe it is better to replace all and check the ones that do not work.”. This suggests that making changes persistent, visible and reversible increases users’ trust in the system.
Representing Constraints
One participant suggested embedding a group of searchlets as “a highlight [feature] for forbidden words.”(P4), arguing that making co-authors aware of these words as the document circulates would help them maintain consistency and improve collaborative editing. Textlets can thus embody constraints and serve as an active guideline when embedded in a document.
Feedback for countlet and variantlet
Participants also described situations in which they wanted to use countlets and variantlets. For example, P3 wanted to count the words in patent abstracts: “I think this could be very useful because many times you are going to count words and [the system] does not keep it.” P4 wanted to use variantlets as a local versioning tool: “If you can version one paragraph [instead] of the whole document, it could be very useful. In that case, you can track which part you have changed.”
Scalability and Limitations
A potential limitation of our approach is scalability: Searchlets that generate large numbers of matches or large numbers of textlets and grouplets in the side panel could cause problems when dealing with large documents. We did not observe such problems during the study, probably due to its short-term nature. Several features mitigate scalability issues: users can collapse grouplets, e.g. search results, to save space, or disable them to remove highlighting in the main text. Scrolling between the document and the side panel could also be synchronized, and future textlets could combine behaviors, e.g. countlet + variantlet, to save space.
One participant found that searchlets might be less useful in simple cases with few matches or variations of the same word: “[With] only 3 matches, I would like to change it directly in the main text” (P3). Another participant wanted searchlets to support regular expressions. Both features could easily be supported in a future prototype.
Summary
This study demonstrated the value of searchlets, the most complex textlet we developed, as well as the potential of other textlets. By turning search matches into persistent objects that they can manipulate directly, users were willing to use functions, such as replace-all, that they otherwise avoid with traditional word processors. They also spontaneously devised novel strategies and appropriated the textlet concept in unexpected ways, such as embedding searchlets for forbidden words. This study provides evidence for the validity of the textlet concept, and encourages us to further develop and assess the textlets we have developed, as well as design new ones.
CONCLUSION AND FUTURE WORK
Writing technical documents frequently requires following constraints and consistently using domain-specific terms. We interviewed 12 legal professionals and showed that technical writers are reluctant to use advanced features of their word processors, and must instead rely on their memory to manage dependencies and maintain consistent vocabulary within their documents. We introduced a simple, immediately accessible but powerful concept called Textlets, interactive objects that reify text selections into persistent items.
Textlets are a deceptively simple but powerful idea, based on the premise that creating persistent, interactive objects to represent abstract or transient concepts such as the selection can empower users. We showed five use cases where textlets can be applied to support consistent reuse, term and reference consistency, length constraints, and exploratory writing. We presented two prototypes that implement a proof of concept of four textlets, and used one as a technology probe to assess a search-and-replace textlet. All participants successfully used the prototype to perform advanced tasks, and most spontaneously generated a novel replace-all-then-correct strategy. Several also invented novel uses and ideas for new textlets.
Future work will focus on creating a more advanced prototype that can be tested in a longitudinal study. We also plan to design and evaluate new textlets for commenting, formatting and computing text. Our findings on collaboration practices also open the door to investigating the potential of textlets for collaborative work. Beyond the examples illustrated in this article, textlets offer a generative concept for creating powerful new tools for document editing.
ACKNOWLEDGMENTS
This work was partially supported by European Research Council (ERC) grant n° 695464 ONE: Unified Principles of Interaction.
REFERENCES
Troll, a Language for specifying Dice-rolls
Mogensen, Torben Ægidius
Published in:
Proceedings of the 2009 ACM symposium on Applied Computing
DOI:
10.1145/1529282.1529708
Publication date:
2009
Document version
Publisher's PDF, also known as Version of record
Citation for published version (APA):
Troll, a Language for Specifying Dice-Rolls
Torben Ægidius Mogensen∗
DIKU, University of Copenhagen
Universitetsparken 1
DK-2100 Copenhagen O, Denmark
torbenm@diku.dk
ABSTRACT
Dice are used in many games, and often in fairly complex ways that make it difficult to unambiguously describe the dice-roll mechanism in plain language.
Many role-playing games, such as Dungeons & Dragons, use a formalised notation for some instances of dice-rolls. This notation, once explained, makes dice-roll descriptions concise and unambiguous. Furthermore, the notation has been used in automated tools for pseudo-random dice-rolling (typically used when playing over the Internet).
This notation is, however, fairly limited in the types of dice-rolls it can describe, so most games still use natural language to describe rolls. Even Dungeons & Dragons uses formal notation only for some of the dice-roll methods in the game. Hence, this paper proposes a more complete notation and describes a tool for pseudo-random rolls and (nearly) exact probability calculations.
The notation is called “Troll”, combining the initial of the Danish word for dice (“terninger”) with the English word “roll”. It is a development of the language Roll described in an earlier paper. The present paper describes the most important features of Troll and its implementation.
Categories and Subject Descriptors
D.3.2 [Programming Languages]: Language Classifications—Specialized application languages; G.3 [Probability and Statistics]: Distribution functions
General Terms
Languages
Keywords
Dice, Probability, Domain-Specific Languages
1. INTRODUCTION
The first thing to ask is: Why would you want a formal notation for specifying dice-rolls?
There are several answers to this:
- Formal notation can give a concise and unambiguous description which can be used to communicate ideas between people. This is, for example, the main motivation for mathematical notation. Games that use dice can (and sometimes do) use formal notation to describe dice-rolls.
- A formal notation is machine readable, which enables use of tools that analyse specifications of dice-rolls. Two types of tools are especially relevant for dice-rolls:
Internet dice-roll servers for playing games that involve dice online or by email. There are many dice-roll servers, but they are each limited to a small number of different types of dice-roll. With a universal notation, a dice-roll server can perform rolls to any specification.
Probability calculators As with any random element used in games, players and game designers are interested in knowing the probability of each possible outcome. A tool that takes a specification of a roll and calculates this is, hence, quite useful.
The concept of using formal notation for dice is not new: One of the first games to use a variety of dice shapes and rolling methods was Dungeons & Dragons from 1974 [3]. The rules introduced a formal notation for dice consisting of the following elements:
\(dn\) describes rolling a single die with sides numbered 1–n. A normal six-sided die is, hence, denoted by \(d6\).
\(mdn\) describes rolling \(m\) dice with sides numbered 1–n and adding up their results. \(3d6\), hence, denotes rolling three six-sided dice and adding them up to get a result between 3 and 18.
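For readers who prefer code, the mdn roll-and-sum convention can be sketched in Python (function names are illustrative, not part of any game's notation):

```python
import random

def roll(m, n):
    """Roll m dice with sides 1..n; the result is a multiset (here a sorted list)."""
    return sorted(random.randint(1, n) for _ in range(m))

def mdn(m, n):
    """The Dungeons & Dragons 'mdn' convention: roll m n-sided dice and add them up."""
    return sum(roll(m, n))

# e.g. mdn(3, 6) yields a value between 3 and 18
```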
In spite of introducing this notation, most of the rulebook used an alternative, less explanatory notation: An interval of values was shown, and it was up to the player to figure out which dice should be rolled to obtain this interval. For example, “3–18” would denote rolling and adding three six-sided dice, while “2–20” would denote rolling and adding two ten-sided dice.
In later editions, the \(mdn\) notation was used more and more, and in the most recent editions, the interval notation is completely eliminated. The \(mdn\) notation is also used in
other role-playing games and has been extended to include addition, so you can write things like \(2d8+1\) or \(d6+d10\). Nevertheless, this extension is far from enough to describe dice-roll methods used in modern role-playing games. Here are some examples of methods from popular games:
- To determine values for personal characteristics such as strength and intelligence, Dungeons & Dragons suggest several methods, one of which is to roll four six-sided dice, discarding the lowest and adding the rest.
- “The World of Darkness” [1] lets a player roll a number of ten-sided dice equal to his ability score. If any of these show 10, additional dice equal to the number of 10s are rolled. If some of these also show 10, this is repeated until no more 10s are rolled. At this point, the number of values exceeding 7 is counted to give the final value of the roll which, consequently, can be any number from 0 upwards.
- “Ironclaw” [4] lets a player roll three dice of different sizes (number of sides) determined by ability scores and count how many exceed a threshold determined by the difficulty of the task.
- “Silhouette” [13] lets a player roll a number of ten-sided dice equal to his ability score and selects the highest of these as the result.
A universal method for dice-rolls needs to be able to describe all of the above, and more. Any Turing-complete programming language can do this, but the result is not necessarily very concise or readable, and it may be impossible to analyse descriptions for such things as probability distributions. Hence, we propose a notation that extends the \(mdn\) notation from Dungeons & Dragons while being readable to non-programmers after a short introduction.
This is not my first attempt at doing so: In 1996, I proposed a notation called “Roll” [7], which attempted the same thing. While it was moderately successful (it was used to calculate probabilities when modifying rules for the new edition of “The World of Darkness” [1]), experience showed that the notation was not as readable to non-programmers as it could be and that it lacked features to concisely describe some of the more complex rolls. To address this, Roll was over time modified and extended until it had little resemblance to the original. Hence, a new name “Troll” was chosen for the revised and extended language.
One of the key features of Troll (and Roll) is the ability to automatically analyse descriptions for probability. Naive calculation will in many instances be far too time-consuming, so a number of optimisations are used. These are described in section 4.
2. THE BASICS OF TROLL
When you roll several dice at the same time, you don’t normally care about any specific order, so it is natural to consider the result of a dice-roll as a \(multiset\), i.e., a collection where the order doesn’t matter but where the multiplicity of elements does. Multisets of integers are used throughout Troll as the only data structure. Set operations like membership, union, intersection and difference extend in the obvious way to multisets. For details, see [17].
We will call a multiset of integers “a value”. A value containing exactly one number will in some contexts be treated as the number it contains. Some constructs require singleton values and run-time errors will be generated if these are applied to non-singleton values. Likewise, some operations require positive or non-negative numbers or non-empty multisets and will generate errors if applied to something other than that.
Like in the original Dungeons & Dragons notation, \(dn\) means rolling a die with \(n\) sides numbered from 1 to \(n\). However, \(mdn\) is the value (multiset) of \(m\) integers obtained by rolling \(m\) \(n\)-sided dice, and you need an explicit sum operator to add up the numbers. Hence, what in Dungeons & Dragons is written as just \(3d6\) will in Troll be written as \(sum 3d6\).
The reason for this is that, unlike in Dungeons & Dragons, we need to do many other things to the dice than just add them. For example, in Silhouette [13], we need to find the maximum of \(n\) ten-sided dice, which we in Troll can write as \(max\ n\ d10\). There are also operators for finding the \(m\) largest numbers in a value and for finding the minimum or \(m\) smallest numbers. For example, the method mentioned earlier for determining personal characteristics in Dungeons & Dragons can be written as \(sum\ largest\ 3\ 4d6\).
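The multiset operators just described can be sketched in Python (illustrative names; a multiset is represented as a plain list):

```python
import random

def roll(m, n):
    """A multiset of m rolls of an n-sided die."""
    return [random.randint(1, n) for _ in range(m)]

def largest(m, value):
    """The m greatest elements of a multiset (Troll's 'largest m')."""
    return sorted(value)[-m:]

# Dungeons & Dragons characteristic: sum largest 3 4d6
characteristic = sum(largest(3, roll(4, 6)))

# Silhouette: max n d10, here with n = 5
silhouette = max(roll(5, 10))
```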
When the number of dice rolled depends on in-game values (such as ability level), it may be useful to use a variable to represent the number of dice. For example, \(max n d10\) will take the number of dice rolled from the variable \(n\). Variables can be bound to values when running Troll or locally inside definitions. We will return to this later.
Anywhere a number can occur in Troll, you can use an expression instead, so it is, for example, perfectly legal (though not very sensible) to write \(d\ d10\), which would roll a die the number of sides of which is determined by a \(d10\). An expression like \(d(2d10)\) will, however, cause a run-time error, as the \(d\) operator requires singletons as arguments. The usual arithmetic operators on numbers, too, are defined only on singleton arguments.
Like \(d\) can take a numeric prefix to specify the number of rolls, you can specify multiple rolls of more complex expressions by using the \# operator: \(6#sum 3d6\) specifies a multiset of six numbers each obtained by adding three dice\(^2\). You can use set-like notation like \(\{3,4,3\}\) to build multisets. Union of values is done using the infix operator \&. There has never been any need for multiset intersection or difference when defining rolls, so operators for these are not included in Troll.
2.1 Comparisons, conditionals and bindings
Some dice-roll mechanisms, such as that described above for Ironclaw [4], require the rolled numbers to be compared against a threshold and those that exceed it to be counted.
Comparison is in Troll done by filters, which compare the elements of a value with a single number and return only the elements where the comparison holds. For example, \(3<4d6\) rolls four \(d6\) and retains only those numbers \(n\) such that \(3 < n\). You can then count the number of remaining elements using the count operator, which simply returns the number of elements in a value. So, an instance of the Ironclaw system where you roll one \(d8\) and two \(d10\) and where the threshold is 5 can be written as \(count\ 5 < \{d8,d10,d10\}\). There are filters for \(=, <, >, \leq, \geq\) and \(\neq\). Note the asymmetric nature of filters: The left-hand operand must be a singleton and is never returned, while the right-hand operand is a value, part of which may be returned.
\(^1\)The space between \(n\) and \(d\) is required to separate the lexical tokens.
\(^2\)Allowing any expression to be prefixed by a number causes ambiguity, hence the need for an explicit operator.
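A filter plus count translates directly to a comprehension in Python (a sketch; the helper name is mine):

```python
import random

def filter_gt(threshold, value):
    """Troll-style filter 'threshold < value': keep the elements n with threshold < n."""
    return [n for n in value if threshold < n]

# count 5 < {d8, d10, d10}: count how many dice exceed the threshold 5
pool = [random.randint(1, 8), random.randint(1, 10), random.randint(1, 10)]
successes = len(filter_gt(5, pool))
```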
Filters can also be used in conditionals. The expression if c then e1 else e2 evaluates the expression c. If this evaluates to a non-empty value, e1 is evaluated and returned, otherwise e2 is evaluated and returned. Note that the semantics of filters ensures that, for example, x<y, where x and y are singletons, is non-empty exactly when x < y.
In some cases you want to do two operations on the same die. You can’t just repeat the expression that rolls the die, as you will get two independent rolls. Instead, you need to locally bind the value of a roll to a name and refer to that name repeatedly. The syntax for this is x := e1; e2. While this may look like an assignment, it is a local binding corresponding to, for example, let x = e1 in e2 in Haskell. A local binding allows us to define the dice-roll for Backgammon, where two identical dice are doubled to give four identical numbers:
    x := d6; y := d6; if x=y then {x,x,y,y} else {x,y}
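The Backgammon definition translates directly to Python (a sketch; the function name is mine):

```python
import random

def backgammon():
    """Two d6, doubled to four identical numbers when they match."""
    x = random.randint(1, 6)
    y = random.randint(1, 6)
    return [x, x, y, y] if x == y else [x, y]
```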
2.2 Repetition
Troll has two constructs for repeated die-rolling. The simplest is the repeat loop, which repeats a roll until a condition holds and then returns the value of the last roll (the one that fulfills the condition). For example,
    repeat x := d10 until x > 1
keeps rolling a d10 until the result is greater than 1. It will, hence, return a value in the range 2...10. Note that the variable x is locally bound, not assigned to, so a loop like
    x := 0; repeat x := x + 1 until x = 10
does not terminate, as the two bound x’s are different. The above is equivalent to
    x := 0; repeat y := x + 1 until y = 10
which is, clearly, not terminating. The only thing that can change the value of the bound variable in different iterations is if the expression has a random element, such as a \(dn\) sub-expression. Basically, the exact same roll is repeated until the condition holds (if ever).
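Under the random-roll semantics, a repeat loop is plain rejection sampling; a Python sketch (helper names are mine):

```python
import random

def repeat_until(roll, cond):
    """Keep evaluating `roll` until `cond` holds for the result; return that result."""
    while True:
        x = roll()
        if cond(x):
            return x

# repeat x := d10 until x > 1
value = repeat_until(lambda: random.randint(1, 10), lambda x: x > 1)
```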
For World of Darkness [1], we want to collect all the dice rolled until a certain condition is fulfilled, so we need a different kind of loop. The accumulate loop works like the repeat loop, but instead of returning just the value that fulfills the condition, it returns the union of all the values up to and including this point. With this, the World of Darkness dice roll can be expressed as
    count 7< n#(accumulate x := d10 until x < 10)
Note that, like with repeat, the bound variable only changes as a result of different values of random elements in the expression; all iterations perform the same actions until these result in a value that fulfills the condition.
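A sampling sketch of the World of Darkness rule described in the introduction (Python; the function name is mine, and this samples outcomes rather than computing probabilities):

```python
import random

def world_of_darkness(pool):
    """Roll `pool` d10; every 10 grants an extra die; count results above 7."""
    successes, pending = 0, pool
    while pending > 0:
        die = random.randint(1, 10)
        pending -= 1
        if die == 10:
            pending += 1      # the accumulate loop keeps rolling on a 10
        if die > 7:
            successes += 1
    return successes
```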
2.3 Other features
The above sections describe only the essential features of Troll. There are many other features including foreach loops, removal of duplicates in a multiset, random selection of n elements in a multiset and even recursive functions. To see a full description of the language, go to the Troll web page [6].
3. DIFFERENCES FROM ROLL
The basic idea of operations on multisets is unchanged from Roll to Troll, so the changes have mainly been addition of more operators, changes to syntax and new loop structures. Below is a summary of the changes and the reasons for them:
- The dice-operator d was in Roll purely a prefix operator, so to roll several identical dice, you needed the # operator, i.e., 3#d6 instead of 3d6. Troll added the mdn form to get closer to the familiar Dungeons & Dragons notation.
- Troll added the set-like notation with curly braces, where Roll required building multisets by using the @ (union) operator. In particular, this has made specification of the empty collection easier, as you can write {} instead of, for example, 0@0.
- Local binding in Roll used a let-in syntax like in ML or Haskell, but users found the assignment-like syntax easier to read and write, especially when several bindings are nested.
- Filters (comparisons) were prefix operators in Roll, so you would write < 3 x instead of 3 < x. The motivation for the prefix syntax in Roll was to emphasise the asymmetric nature of filters, but users found it confusing.
- Roll had a more powerful loop structure that allowed changes in variables between iterations, but it was too complex for most users. Hence, it was replaced by the simpler repeat and accumulate loops, and recursive functions were added to handle the more complicated (and much rarer) cases.
- More predefined operators were added. Some were just abbreviations of common cases, such as min abbreviating least 1, while others would be impossible to emulate without using recursive functions. An example of this is pick, which picks n elements from a multiset.
As an example, the World of Darkness roll described earlier would in Roll be written as
    count #>7 let x = d10 in repeat if =10 x then d10 else 0@0
which is, clearly, less readable.
Nearly all changes have been motivated through discussions with or requests from users of earlier versions, sometimes to make descriptions easier to write and read, and sometimes to make it possible to specify a certain dice-roll method at all.
4. IMPLEMENTATION
Troll is implemented as an interpreter written in Standard ML (specifically, Moscow ML). Two semantics are implemented:
- A random-roll semantics, where the interpreter makes random samples of the described dice-roll.
- A probability semantics, which calculates the probability distribution of the possible outcomes of the described dice-roll.
The random-roll semantics is implemented as a fairly straightforward interpreter using a pseudo-random-number generator seeded by system time, so this will not be detailed further. The implementation of the probability semantics is a bit more interesting, so we will elaborate on this.
If we for the moment ignore loops and recursive functions, a probability distribution for a dice roll is a finite map from outcomes to probabilities, such that the probabilities add up to one. Loops and recursion can make the number of possible outcomes infinite and allow the possibility of non-termination with non-zero probability, so a finite map is insufficient. We will, nevertheless, use finite maps and deal with the infinity issue later.
We will write a finite map as a set of pairs of values and probabilities. For example, the distribution for \( \text{sum } 2d2 \) is:
\[
\{(2,0.25), (3,0.5), (4,0.25)\}
\]
There are, basically, two ways in which we can calculate a finite probability map for a dice-roll definition:
1. We can from each subexpression produce a finite map for its possible outcomes and combine these to find a finite map for the outcomes of the full expression.
2. We can use Prolog-style backtracking to obtain all global outcomes one at a time and count these at the top-level.
We can call the first method \textit{enumeration in space} and the second \textit{enumeration in time}. They have different advantages and disadvantages:
- Enumeration in space needs to enumerate all intermediate values at the same time, so if there are more intermediate values than final values, it can use very large amounts of memory. An example is \( \text{sum } nd10 \), where there are \( O(n^9) \) possible values of \( nd10 \) but only \( O(n) \) possible values for the sum\(^3\). Hence, enumeration in time will use only \( O(n) \) space, while enumeration in space will use \( O(n^9) \) space.
- Enumeration in time needs only keep track of one value at any given time (except at the top-level count), so you don’t need very much space.
- Because enumeration in time looks at intermediate values one at a time, it cannot recognise that it has seen a value before and will, hence, often repeat calculations that it has already done. Enumeration in space can combine identical values in the finite map by adding up their probabilities and, hence, avoid doing the same calculation twice. For example, while \( nd10 \) has \( O(n^9) \) possible values, enumeration by backtracking has to look at \( 10^n \) combinations. To find the distribution for \( \text{sum } nd10 \), enumeration in time has to look at \( 10^n \) multisets of \( n \) numbers and add these up using \( n-1 \) additions for each. Enumeration in space will combine values for \( nd10 \), so it only needs to add up \( O(n^9) \) multisets.
Though space costs more than time on modern computers, a reduction from exponential to polynomial time is worth a polynomial increase in space.
\(^3\)To be precise, \( nd10 \) has \( \binom{n+9}{9} \) possible outcomes.
Even so, enumerating \( O(n^9) \) multisets to find the distribution for \( \text{sum } nd10 \) requires a lot of time and space. Fortunately, we can do better by the following observation:
Since \( \text{sum}(A \cup B) = \text{sum } A + \text{sum } B \), we can find the distribution for \( \text{sum } nd10 \) by first finding the distributions for \( \text{sum } (n-1)d10 \) and \( \text{sum } d10 \) and combining these into a distribution for \( \text{sum } nd10 \). If we apply this recursively, we never need to store a distribution with more than \( O(n) \) values, since there are only \( O(m) \) possible outcomes for \( \text{sum } md10 \) when \( m \leq n \).
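This decomposition amounts to iterated convolution of finite maps. A Python sketch (illustrative, using exact rationals) builds the distribution of sum n dm one die at a time, so the map never holds more outcomes than the final sum has:

```python
from collections import defaultdict
from fractions import Fraction

def die(n):
    """Finite map outcome -> probability for a single n-sided die."""
    return {k: Fraction(1, n) for k in range(1, n + 1)}

def convolve(d1, d2):
    """Distribution of the sum of two independent distributions."""
    out = defaultdict(Fraction)
    for a, p in d1.items():
        for b, q in d2.items():
            out[a + b] += p * q
    return dict(out)

def sum_ndm(n, m):
    """Distribution of 'sum n dm', computed by repeated convolution."""
    dist = {0: Fraction(1)}
    for _ in range(n):
        dist = convolve(dist, die(m))
    return dist
```

For example, `sum_ndm(2, 2)` reproduces the distribution for sum 2d2 given earlier.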
To exploit such algebraic properties, Troll uses a non-normalised representation for distributions described by the following recursive data-type definition:
\[
D ::= M! \;\mid\; D \cup D \;\mid\; D \uplus_p D \;\mid\; 2 \times D
\]
where \( M \) is a multiset of numbers and \( 0 < p < 1 \). \( M! \) denotes the distribution with only one possible outcome, which is \( M \); \( d_1 \cup d_2 \) combines the outcomes of two distributions by union; \( d_1 \uplus_p d_2 \) chooses between the outcomes of two distributions with probability \( p \) of choosing from the first; and \( 2 \times d \) is an abbreviation of \( d \cup d \).
We can translate this representation into finite maps by the function \( F \) below:
\[
\begin{align*}
F(M!) &= \{(M,\ 1)\} \\
F(d_1 \cup d_2) &= \{(M_1 \cup M_2,\ pq) \mid (M_1, p) \in F(d_1),\ (M_2, q) \in F(d_2)\} \\
F(d_1 \uplus_p d_2) &= \{(M,\ pq) \mid (M, q) \in F(d_1)\} \cup \{(M,\ (1-p)q) \mid (M, q) \in F(d_2)\} \\
F(2 \times d) &= \{(M_1 \cup M_2,\ pq) \mid (M_1, p) \in F(d),\ (M_2, q) \in F(d)\}
\end{align*}
\]
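As a sketch of how such a representation and its flattening might look in Python (the tuple encoding, sorted-tuple multisets, and names are mine, not Troll's actual data structures):

```python
from fractions import Fraction

# Distribution trees: ('single', M) with M a sorted tuple, ('union', d1, d2),
# ('choice', p, d1, d2), ('twice', d).
def F(d):
    """Flatten a distribution tree into a finite map multiset -> probability."""
    tag = d[0]
    if tag == 'single':
        return {d[1]: Fraction(1)}
    if tag == 'union':
        f1, f2 = F(d[1]), F(d[2])
        out = {}
        for m1, p in f1.items():
            for m2, q in f2.items():
                key = tuple(sorted(m1 + m2))   # multiset union
                out[key] = out.get(key, Fraction(0)) + p * q
        return out
    if tag == 'choice':
        p, f1, f2 = d[1], F(d[2]), F(d[3])
        out = {m: p * q for m, q in f1.items()}
        for m, q in f2.items():
            out[m] = out.get(m, Fraction(0)) + (1 - p) * q
        return out
    if tag == 'twice':
        return F(('union', d[1], d[1]))
```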
By operating on this representation, we can exploit two kinds of algebraic properties of functions on multisets:
- A function \( f \) is \textit{linear} if \( f(A \cup B) = f(A) \cup f(B) \).
- A function \( f \) is \textit{homomorphic} if there exists an operator \( \oplus \) such that \( f(A \cup B) = f(A) \oplus f(B) \).
Examples of linear functions include filters such as \( 7< \). We can lift a linear function \( f \) to distributions in the following way:
\[
\begin{align*}
f(M!) &= f(M)! \\
f(d_1 \cup d_2) &= f(d_1) \cup f(d_2) \\
f(d_1 \uplus_p d_2) &= f(d_1) \uplus_p f(d_2) \\
f(2 \times d) &= 2 \times f(d)
\end{align*}
\]
Examples of homomorphic functions include \textit{sum} (\( \oplus \) is \( + \)), \textit{count} (\( \oplus \) is \( + \)) and \textit{min} (\( \oplus \) is \( \min \)). We can lift a homomorphic function \( f \) to distributions in the following way:
\[
\begin{align*}
f(M!) &= f(M)! \\
f(d_1 \cup d_2) &= f(d_1) \,\bar{\oplus}\, f(d_2) \\
f(d_1 \uplus_p d_2) &= f(d_1) \uplus_p f(d_2) \\
f(2 \times d) &= \oplus^2 f(d)
\end{align*}
\]
\[
\begin{align*}
M! \,\bar{\oplus}\, N! &= (M \oplus N)! \\
(d_1 \uplus_p d_2) \,\bar{\oplus}\, d_3 &= (d_1 \,\bar{\oplus}\, d_3) \uplus_p (d_2 \,\bar{\oplus}\, d_3) \\
d_1 \,\bar{\oplus}\, (d_2 \uplus_p d_3) &= (d_1 \,\bar{\oplus}\, d_2) \uplus_p (d_1 \,\bar{\oplus}\, d_3) \\
\oplus^2 M! &= (M \oplus M)! \\
\oplus^2 (d_1 \uplus_p d_2) &= (\oplus^2 d_1) \uplus_{p^2} \big((d_1 \,\bar{\oplus}\, d_2) \uplus_q (\oplus^2 d_2)\big), \quad q = \tfrac{2p(1-p)}{1-p^2}
\end{align*}
\]
where \( \oplus \) is the operator for the homomorphism \( f \), \( \bar{\oplus} \) is \( \oplus \) lifted to union-free distributions, and \( \oplus^2 d \) is an optimised version of \( d \,\bar{\oplus}\, d \).
4.1 Local bindings
If we locally bind a value to a variable, as in \( x := d6; x \times x \), the two occurrences of \( x \) after the semicolon must always refer to the same value. Hence, in the expression \( x \times x \), the distribution for \( x \) must have only one possible outcome. So the local binding must normalise the distribution for \( x \) to a finite map and then evaluate \( x \times x \) for each possible value and combine the results to a new finite map.
By normalising, we lose all of the optimisations of using a non-normalised representation, so local binding is one of the most costly operations in Troll.
We represent normalised finite maps as a special case of the non-normalised representation where there are no union nodes and where the left operand to a choice node is always of the form \( M! \), i.e., \( N ::= M! \mid M! \uplus_p N \). Furthermore, the nodes of the form \( M! \) are in strictly ascending order (using a lexicographic ordering on multisets).
4.2 Loops
The first observation is that, since all iterations evaluate the same expression and the last such evaluation provides the result, the set of outcomes of a loop of the form

    repeat x := e until c

is a subset of the set of outcomes of \( e \).
The way we handle repeat loops in Troll is to first calculate the distribution \( d \) for \( e \) and then rewrite \( d \) into a form \( d_1 \uplus_p d_2 \), where all outcomes of \( d_1 \) fulfil the condition \( c \) and none of those of \( d_2 \) do. It is now clear that the distribution for repeat x := e until c is \( d_1 \): Repetition is done until we have a result in \( d_1 \), regardless of how unlikely this is. There is a possibility that \( p = 0 \), i.e., that none of the outcomes of \( e \) fulfil \( c \). It would be possible to include nontermination as a possible value in distributions, but since dice rolls are intended to terminate, we instead report an error whenever there is a positive chance of nontermination.
An accumulating loop of the form

    accumulate x := e until c

can have an infinite number of possible outcomes. If we, like above, rewrite the distribution for \( e \) into the form \( d_1 \uplus_p d_2 \), we find that the distribution \( d' \) for the loop can be defined by the equation
\[
d' = d_1 \uplus_p (d_2 \,\bar{\oplus}\, d')
\]
This will not have a finite solution unless \( p = 1 \) or \( d_2 = \{\} \).
Instead of trying to work with infinite maps, we have chosen to approximate by unfolding the above equation a finite (but user-definable) number of times and then replacing the remaining recursive reference to \( d' \) by \( d_1 \). If rerolls have probability less than 1, we can get arbitrarily good approximations by increasing the unroll depth.
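A minimal Python sketch of this finite unrolling, for the single-die loop accumulate x := d10 until x < 10 (the encoding of multisets as sorted tuples in a dict of probabilities is an assumption of mine):

```python
from fractions import Fraction

def accumulate_d10_until_lt10(depth):
    """Approximate the distribution of 'accumulate x := d10 until x < 10'.

    d1 holds the outcomes fulfilling the condition (x in 1..9, renormalised);
    the reroll case prepends a 10.  The recursive equation is unrolled `depth`
    times, then the remaining recursive reference is replaced by d1.
    """
    p_stop = Fraction(9, 10)
    d1 = {(k,): Fraction(1, 9) for k in range(1, 10)}
    dist = dict(d1)                      # base case: approximate d' by d1
    for _ in range(depth):
        new = {}
        for m, q in d1.items():          # stop now, with probability p_stop
            new[m] = new.get(m, Fraction(0)) + p_stop * q
        for m, q in dist.items():        # roll a 10, then continue as d'
            key = tuple(sorted((10,) + m))
            new[key] = new.get(key, Fraction(0)) + (1 - p_stop) * q
        dist = new
    return dist
```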
We are now left with the problem of rewriting \( d \) to \( d_1 \uplus_p d_2 \) given the condition \( c \). We note that the loop introduces a local binding, so we start by normalising \( d \). We then find \( d_1 \uplus_p d_2 \) by the following function:
\[
\begin{align*}
C(M!) &= M! \uplus_q M! \qquad \text{where } q = 1 - E(c_M) \text{ and } c_M \text{ is the distribution for } c \text{ when } x = M \\
C(M! \uplus_p d) &= (M! \uplus_{p_2} d_1) \uplus_{p_1} (M! \uplus_{p_3} d_2) \qquad \text{where } C(d) = d_1 \uplus_r d_2, \\
& \qquad p_1 = pq + (1-p)r, \quad p_2 = pq/p_1, \quad p_3 = p(1-q)/(1-p_1)
\end{align*}
\]
The \( M! \uplus_q M! \) in the first line may seem a bit curious, as it is equivalent to \( M! \), but since \( q \) is significant for later calculations, we write the distribution in this redundant way. If there are several possible values for \( x \), i.e., if the normalised distribution contains a choice, we calculate the split for each possible value and combine the results. Here \( E(d) \) is the probability that a distribution \( d \) yields the empty multiset, i.e., that the condition does not hold:
\[
\begin{align*}
E(\{\}!) &= 1 \\
E(M!) &= 0 \quad \text{if } M \neq \{\} \\
E(d \uplus_p d') &= p \cdot E(d) + (1-p) \cdot E(d') \\
E(d \cup d') &= E(d) \cdot E(d') \\
E(2 \times d) &= E(d)^2
\end{align*}
\]
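A Python sketch of the emptiness probability \( E \) over a tree-shaped distribution representation (the tuple encoding is illustrative, not Troll's actual data structure):

```python
from fractions import Fraction

# Distribution trees: ('single', M) with M a tuple, ('union', d1, d2),
# ('choice', p, d1, d2), ('twice', d).
def empty_prob(d):
    """E(d): the probability that the distribution yields the empty multiset."""
    tag = d[0]
    if tag == 'single':
        return Fraction(1) if d[1] == () else Fraction(0)
    if tag == 'choice':
        p = d[1]
        return p * empty_prob(d[2]) + (1 - p) * empty_prob(d[3])
    if tag == 'union':
        return empty_prob(d[1]) * empty_prob(d[2])
    if tag == 'twice':
        return empty_prob(d[1]) ** 2
```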
4.3 Other optimisations
In most places where distributions are built, some local simplifications at the top-level of the tree-structure are attempted. An incomplete list of these is shown below.
\[
\begin{align*}
d \uplus_p d &\rightarrow d \\
d_1 \uplus_1 d_2 &\rightarrow d_1 \\
d_1 \uplus_0 d_2 &\rightarrow d_2 \\
d_1 \uplus_p (d_1 \uplus_q d_2) &\rightarrow d_1 \uplus_{p'} d_2 \quad \text{where } p' = p + q - pq \\
d \cup d &\rightarrow 2 \times d \\
\{\}! \cup d &\rightarrow d \\
M! \cup N! &\rightarrow (M \cup N)!
\end{align*}
\]
These add a small overhead to construction of trees, but can sometimes reduce their size dramatically.
5. ASSESSMENT
So, how well does Troll work?
5.1 The language
While Troll is considerably easier for non-programmers and programmers alike to use than Roll, people with no programming experience at all usually find it hard to write definitions that involve conditionals or loops, but expressions like \(sum\ largest\ 3\ 4d6\) or \(count\ 7 < 5d10\) are not usually any problem. People with a minimum of programming experience (such as experience from writing formulae in spreadsheets) can usually write definitions using a single loop or conditional, and more experienced users can use all of the language.
Usually, people can read and understand definitions that are more complex than they can write.
While a number of game designers around the world use Troll to calculate probabilities, the notation has not been
adopted in any game rules texts. Any one game will usually only use one or two different dice-roll methods, so it is easier for the writers to use a specialised notation or just plain English. Nor has any Internet dice-server used Troll to allow users to describe rolls that are not pre-programmed. So, overall, the success of Troll has almost exclusively been in probability calculation, where there (to my knowledge) are no other similar tools.
5.2 The implementation
Due to the optimisations enabled by the non-normalised representation, calculating the probability distribution of most simple definitions is very fast. For example, calculating the distribution for \( \text{sum } 50d10 \) takes about 0.1 second on my fairly old machine\(^4\) and the calculation for \( \text{count } 7<100d10 \) takes less than 0.01 second.
But when rolls combine a large number of dice and operations that do not distribute well over the non-normalised representation, calculations can take very long and use enormous amounts of memory. This can even happen for dice-roll systems used in actual games. For example, the game “Legend of Five Rings” \([18]\) uses a roll mechanism that can be described in Troll as
    sum largest M N#(accumulate x := d10 until x < 10)

where \( M \) and \( N \) depend on the situation. With \( M = 3 \), \( N = 5 \) and the maximum number of iterations for \( \text{accumulate} \) set to 5, Troll takes nearly 500 seconds to calculate the result, and it gets much worse if any of the numbers increase.
If exact calculation of probabilities takes too long, the random-roll semantics of Troll can be used to generate a large number of samples which can be used to calculate statistics that approximate the probabilities.
6. RELATED WORK
Apart from the notation originated in Dungeons & Dragons \([3]\), most examples of notation for dice-roll are used only in games from a single publisher. Wikipedia \([16]\) lists a few examples of those. Besides Roll \([7]\) and Troll, the only other attempt at defining a universal dice-roll notation that I know of is in a Usenet post from 1992 \([14]\). There are many similarities between this and Troll. Numbers are kept separate until explicitly summed or otherwise operated on and there are clear equivalences to the Troll operations \( \text{sum, d, #, least} \) (but not to the more complex Troll features). A partial implementation of the language was implemented in a dice-roll calculator \([15]\).
There are many examples of extending traditional languages with probabilistic choice operators, e.g., \([9, 2, 12]\), but in the main these do not allow calculation of probabilities, only sampling at runtime.
Other languages are designed for calculating probabilities \([10, 5, 8, 11]\). None of these are specialised for defining dice-rolls, but some similarities to Troll exist. For example, the stochastic lambda calculus \([11]\), like Troll, can be instantiated to both a sampling semantics and a probability calculation. The authors note that calculating probabilities for product spaces (which are similar to unions) can take a very long time and discuss translating expressions into \textit{measure terms} that keep the parts of a product space separate as long as possible. This has some similarities to using a non-normalised form, but the measure terms are more complex (and potentially more powerful) than the representation used in Troll. The main probabilistic construct in the stochastic lambda calculus is \textit{choose} \( p\ e_1\ e_2 \), which is equivalent to the \( d_1 \uplus_p d_2 \) form in the unnormalised representation in Troll.
The predecessor of Troll, Roll is described in a paper \([7]\) that includes formal semantics for both sampling and probability calculation. An implementation using an unnormalised representation similar to the one described in this paper is described, but not all the optimisations described above were applied to Roll.
\(^4\)3.2GHz Pentium 4, 1.5GB RAM
7. REFERENCES
|
{"Source-Url": "https://static-curis.ku.dk/portal/files/17117511/Troll-SAC.pdf", "len_cl100k_base": 8471, "olmocr-version": "0.1.49", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 25784, "total-output-tokens": 10197, "length": "2e13", "weborganizer": {"__label__adult": 0.0010004043579101562, "__label__art_design": 0.0005168914794921875, "__label__crime_law": 0.000919342041015625, "__label__education_jobs": 0.0010309219360351562, "__label__entertainment": 0.0002837181091308594, "__label__fashion_beauty": 0.00045609474182128906, "__label__finance_business": 0.0004096031188964844, "__label__food_dining": 0.001384735107421875, "__label__games": 0.026275634765625, "__label__hardware": 0.001461029052734375, "__label__health": 0.0011577606201171875, "__label__history": 0.000621795654296875, "__label__home_hobbies": 0.00015783309936523438, "__label__industrial": 0.0008587837219238281, "__label__literature": 0.0009636878967285156, "__label__politics": 0.0005521774291992188, "__label__religion": 0.0010786056518554688, "__label__science_tech": 0.041839599609375, "__label__social_life": 0.00015747547149658203, "__label__software": 0.00841522216796875, "__label__software_dev": 0.90869140625, "__label__sports_fitness": 0.0009675025939941406, "__label__transportation": 0.0006165504455566406, "__label__travel": 0.0003867149353027344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35936, 0.03151]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35936, 0.72463]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35936, 0.87826]], "google_gemma-3-12b-it_contains_pii": [[0, 557, false], [557, 4483, null], [4483, 11271, null], [11271, 17219, null], [17219, 23395, null], [23395, 29691, null], [29691, 35936, null]], "google_gemma-3-12b-it_is_public_document": [[0, 557, true], [557, 4483, null], [4483, 11271, null], [11271, 17219, 
null], [17219, 23395, null], [23395, 29691, null], [29691, 35936, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35936, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35936, null]], "pdf_page_numbers": [[0, 557, 1], [557, 4483, 2], [4483, 11271, 3], [11271, 17219, 4], [17219, 23395, 5], [23395, 29691, 6], [29691, 35936, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35936, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-25
|
2024-11-25
|
4ab9c7488e867cd58c74ee7851693c151b1adfa1
|
[REMOVED]
|
{"Source-Url": "https://hal.science/hal-01234653v1/file/gemoc-dag144412_3rdgroup.pdf", "len_cl100k_base": 8600, "olmocr-version": "0.1.53", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 44402, "total-output-tokens": 13208, "length": "2e13", "weborganizer": {"__label__adult": 0.0003390312194824219, "__label__art_design": 0.0005350112915039062, "__label__crime_law": 0.0002682209014892578, "__label__education_jobs": 0.0019321441650390625, "__label__entertainment": 9.751319885253906e-05, "__label__fashion_beauty": 0.00016319751739501953, "__label__finance_business": 0.0002815723419189453, "__label__food_dining": 0.0003075599670410156, "__label__games": 0.0006346702575683594, "__label__hardware": 0.0006427764892578125, "__label__health": 0.0004584789276123047, "__label__history": 0.00040221214294433594, "__label__home_hobbies": 9.894371032714844e-05, "__label__industrial": 0.0004107952117919922, "__label__literature": 0.0005578994750976562, "__label__politics": 0.00031304359436035156, "__label__religion": 0.0005412101745605469, "__label__science_tech": 0.0499267578125, "__label__social_life": 0.00013840198516845703, "__label__software": 0.0099639892578125, "__label__software_dev": 0.93115234375, "__label__sports_fitness": 0.00026488304138183594, "__label__transportation": 0.000507354736328125, "__label__travel": 0.00020742416381835935}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 56334, 0.03112]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 56334, 0.59526]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 56334, 0.87728]], "google_gemma-3-12b-it_contains_pii": [[0, 1123, false], [1123, 3569, null], [3569, 6709, null], [6709, 9565, null], [9565, 13012, null], [13012, 15492, null], [15492, 18554, null], [18554, 21877, null], [21877, 24576, null], [24576, 27754, null], [27754, 30963, null], [30963, 34222, 
null], [34222, 37276, null], [37276, 40318, null], [40318, 43524, null], [43524, 46520, null], [46520, 49910, null], [49910, 53402, null], [53402, 56334, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1123, true], [1123, 3569, null], [3569, 6709, null], [6709, 9565, null], [9565, 13012, null], [13012, 15492, null], [15492, 18554, null], [18554, 21877, null], [21877, 24576, null], [24576, 27754, null], [27754, 30963, null], [30963, 34222, null], [34222, 37276, null], [37276, 40318, null], [40318, 43524, null], [43524, 46520, null], [46520, 49910, null], [49910, 53402, null], [53402, 56334, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 56334, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 56334, null]], "pdf_page_numbers": [[0, 1123, 1], [1123, 3569, 2], [3569, 6709, 3], [6709, 9565, 4], [9565, 13012, 5], [13012, 15492, 6], [15492, 18554, 7], [18554, 21877, 8], [21877, 24576, 9], [24576, 27754, 10], [27754, 30963, 11], [30963, 34222, 12], [34222, 37276, 13], [37276, 40318, 14], [40318, 43524, 15], [43524, 46520, 16], [46520, 49910, 17], [49910, 53402, 18], [53402, 56334, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 56334, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
2d68c799c3bcbdc05dcda5166c65321912dfccbc
|
An Auto-Programming Approach to Vulkan
Vladimir Frolov\textsuperscript{1,2}, Vadim Sanzharov\textsuperscript{2}, Vladimir Galaktionov\textsuperscript{1} and Alexandr Scherbakov\textsuperscript{2}
\textsuperscript{1} Keldysh Institute of Applied Mathematics, Miusskaya sq., 4, Moscow, 125047, Russia
\textsuperscript{2} Lomonosov Moscow State University, GSP-1, Leninskie Gory, Moscow, 119991, Russia
Abstract
We propose a novel high-level approach to software development on the GPU using the Vulkan API. Our goal is to speed up development and performance studies for complex algorithms on the GPU, which is quite difficult and laborious in Vulkan due to the large number of HW features and low-level details. The proposed approach uses auto-programming to translate ordinary C++ into an optimized Vulkan implementation with automatic shader generation, resource binding and fine-grained barrier placement. Our model is not general-purpose programming, but it is extensible and customer-focused. For a single C++ input our tool can generate multiple different implementations of an algorithm in Vulkan for different cases or types of hardware. For example, we automatically detect reduction in the C++ source code and then generate several variants of parallel reduction on the GPU: optimized for different warp sizes, with or without atomics, using or not using subgroup operations. Another example is GPU ray tracing applications, for which we can generate different variants: a pure software implementation in a compute shader, using hardware-accelerated ray queries, or using the full RTX pipeline. The goal of our work is to increase the productivity of developers who are forced to use Vulkan because their software requires certain hardware features, but who still care about the cross-platform ability of the developed software and want to debug their algorithm logic on the CPU. Therefore, we assume that the user will take the generated code and integrate it with hand-written Vulkan code.
Keywords
GPGPU, Vulkan API, ray tracing, code generation
1. Introduction
Over the past 5 years, dramatic changes have taken place in the field of GPU programming, which have put researchers and developers of deep learning, computer vision and computer graphics algorithms around the world in a difficult situation: widely used and convenient cross-platform technologies such as OpenCL and OpenMP do not provide access to the latest capabilities of modern GPUs (indirect dispatch, command buffers, ray tracing, texture compression, and many others). Technologies which do provide enough hardware features are proprietary (CUDA, OptiX) or difficult and laborious (Vulkan, DX12, Metal). In practice, developers have to support several variants of the same algorithm for different GPUs to achieve the desired level of performance and compatibility. Moreover, the differences in the source code (and in performance) can be dramatic. For example, image processing can be implemented via compute shaders, via the graphics pipeline with hardware alpha blending, or via the graphics pipeline with sub-passes on mobile HW. The essence of an algorithm does not change with how exactly it is implemented on the GPU, but developers have to maintain different versions and work with low-level details, which usually change for different GPUs. Our goal is to preserve a pure algorithmic software description, but at the same time retain the ability to use any existing or future Vulkan HW features.
2. Existing Solutions
The number of existing GPU and FPGA programming technologies is significant [1] (and we believe that this indicates the high relevance of the problems outlined in the previous section). It’s possible to classify these technologies into several groups.
**Libraries.** Usually aimed at a specific class of developers who need implementations of specific tools or algorithms. This category includes such well-known libraries as TBB, Thrust [2], CUBLAS, IPP, NPP, MAGMA [3], [4], HPX [5] and many others. Some of them turn into rather powerful software solutions (for example PyTorch [6], TensorFlow [7] and [8]). The main drawback of libraries is their narrow focus and limited capabilities. In addition, many libraries are deliberately deprived of cross-platform ability (for example, TBB, Thrust, IPP, Intel Embree), since they are released by a certain CPU or GPU manufacturer in order to promote its own hardware, and that is why they are made non-portable.
**Directive-based Programming Models.** These include technologies such as OpenMP, OpenACC [9], C++ AMP (Microsoft), Spearmint [10], OP2 [11], the approaches of [12-13] and DVM / DVMH [14-16]. This approach has 2 key disadvantages. The first is the absence of control over the operations of copying and distributing data between the CPU and GPU, which are done automatically. This becomes a bottleneck and results in poor performance [17]. In solving this problem, interesting results were achieved in [12], where a software pipeline was used that combines the stages of execution on the CPU and GPU. However, the pipeline cannot always help, and in certain situations memory management and copying must be strictly controlled. The second disadvantage of this group of technologies is the extremely weak support for GPU hardware capabilities. For example, in OpenACC it is impossible to directly use shared memory or warp voting functions, and there is no support for textures (which is unacceptable for high-performance and energy-efficient computer vision and computer graphics applications). Other technologies have similar problems as well.
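In this style, parallelization is requested with annotations on otherwise ordinary code. A minimal OpenMP-style sketch summing the positive numbers of an array (our own illustration; a compiler built without OpenMP support simply ignores the pragma and runs the loop serially):

```cpp
#include <cassert>

// Directive-based style: the loop itself is ordinary C++; the pragma
// asks the runtime to parallelize it, and in OpenACC/OpenMP "target"
// variants also to manage CPU<->GPU data movement automatically.
int sumPositive(const int* data, int n) {
  int summ = 0;
  #pragma omp parallel for reduction(+:summ)
  for (int i = 0; i < n; i++)
    if (data[i] > 0)
      summ += data[i];
  return summ;
}
```

The criticism above is visible even in this tiny example: nothing in the source says where the data lives or when it must be copied to the GPU.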
**Skeletal Programming.** This includes such works as SkePU [18, 19], SkelCL [20], SYCL [21] and its derivatives (Intel oneAPI). These technologies target developers actively using STL-like containers in C++. The goal is to have one implementation of the algorithm in C++ which can work efficiently both on the CPU and on the GPU, or with hybrid execution (simultaneously on the CPU and GPU). The main advantage over the previous group of technologies is higher efficiency for some algorithms (those which actively use basic skeletons or their combinations). The disadvantage is a significant deterioration in readability, maintainability and flexibility, since the implementation of the algorithms has to fit the skeleton, which leads to unnatural-looking implementations [22]. In addition, skeletal programming inherits almost all the disadvantages of the previous group. However, it still took an important step forward, since it was the first to apply algorithmic optimizations on a GPU.
**GPGPU.** This group includes CUDA and OpenCL – technologies which are employed by a wide class of professional GPU developers and provide access to a wide range of hardware capabilities of modern GPUs. Nowadays, CUDA is the dominant technology thanks to Nvidia's technological leadership. However, in addition to the lack of cross-platform functionality, it has many drawbacks: CUDA is a very heavy technology with a huge number of functionalities and versions (11 versions at the moment), which do not always work equally well on different Nvidia GPUs. Porting a software system from one version of CUDA to another is often time-consuming.
To overcome the dependence on a single platform, several open-source implementations of CUDA were developed using OpenCL or SIMD CPU instructions [23-25], as well as AMD HIP [26]. But this approach is inherently unsuccessful: the lack of hardware support for certain functionality in OpenCL (or another technology used as a back-end) leads to the algorithm either not working at all, or working slowly (due to software emulation of this functionality), which is unacceptable for most GPGPU applications. As for OpenCL itself, the main drawback of this technology is the lack of support for the hardware functionality of modern GPUs. For example, indirect dispatch [27] was not included even in the recently released OpenCL 3.0 standard. This functionality is critical for complex algorithms where control flow depends on GPU computation [28].
**Graphics APIs, Vulkan.** This includes technologies such as OpenGL4+, DirectX10 - DirectX11, DirectX12, Metal, Vulkan. GPU programming has changed a lot over the past 5 years with the new hardware features in APIs such as Vulkan, DirectX12, and Metal. In the following we consider only Vulkan, because it is a cross-platform technology, unlike DirectX12 and Metal. Unfortunately, the complexity of developing programs using Vulkan exceeds that of CUDA or OpenCL by up to 10 times [29], which is a consequence of the manual control over many hardware capabilities.
The traditional way of reducing code complexity is to create lightweight helper libraries, which can be general-purpose or specific to each application. However, in this case such an approach doesn't really help much, because the user still has to work with the same Vulkan entities, setting up and connecting the relationships between them. An alternative solution is to create a heavier library or an engine that encapsulates Vulkan entities and tries to handle things automatically (V-EZ, VUH, Kompute). But this method does not greatly distinguish the engine from the work that, for example, the OpenGL driver does. This usually leads to suboptimal implementations and bugs, as the developer largely duplicates the work of the driver (or of the other API). The problem is fundamental: if someone is going to use Vulkan, they need very specific low-level control over certain functionality that is supposed to be implemented and carefully configured in some places, but not for everything.
**Domain Specific APIs.** This group is comprised of technologies such as Nvidia OptiX [30], Microsoft DirectML [31], and AMD MIOpen. OptiX is a specialized technology for accelerating GPU ray tracing applications developed in C++. OptiX includes two key technologies: (1) hardware-accelerated ray tracing and (2) accelerating virtual function calls using shader tables. These 2 functionalities are also available in Vulkan and DirectX12. Since it is much easier to program in OptiX (and this is the only industrial-level technology of this kind that supports C++), almost all industrial rendering systems have switched to OptiX: Octane, VRay, iRay, RedShift, Cycles, Thea and many others. Although the latest AMD GPUs have hardware support for ray tracing, in fact users have no alternative to Nvidia. Analogues (such as [32, 33]) significantly lag behind solutions based on OptiX. This is an example of how a programming technology combined with a hardware implementation established the monopoly of one GPU manufacturer, preventing the developers of rendering systems from creating cross-platform solutions, which is a significant drawback.
**Domain Specific Languages (DSL).** This group targets developers working in specific fields, for whom it is important to achieve maximum simplification and abstraction from the details of the algorithm implementation on the GPU with a good level of performance.
For example, the languages Darkroom [34] and Halide [35-37] are designed to create image processing algorithms. Cross-platform ability in Halide is achieved by implementing a large number of low-level layers, and high efficiency due to the use of optimized filter sequences. One of the key optimizations is based on the idea of reordering operations: if we consider applying two filters sequentially to an image, then often the image can be divided into regions and the entire chain of filters can be applied to each region at once: this decreases L2 cache misses.
The disadvantage of DSL is that algorithms and knowledge written in them are difficult to transfer to other areas and difficult to integrate with the rest of software that does not use a DSL. In addition, the more areas the software solution targets, the more different DSLs will need to be used and stacked, which significantly complicates the development. For example, in modern computer graphics applications, *shaders are essentially a domain-specific language*. Currently in graphics and ray tracing pipelines an algorithm is distributed over several programs (from 2 to 5), supplemented by setting the pipeline state in C++ code. This makes the process of writing a program extremely difficult, since there is no single description of the algorithm, but instead *there are many disparate programs and configurable links between them in different places*.
**Various high-level approaches.** This group includes different unique solutions.
[38] focuses on achieving cross-platform high performance. The goal is to have a single representation of an algorithm that can be translated into an efficient implementation on various computing systems. The means of achievement is the so-called “multidimensional homomorphisms”, a formal description of a problem at a high level that allows expressing computations with parallel patterns that can be implemented in different ways on different hardware. A significant limitation of [38] is the use of OpenCL as a low-level layer, which does not provide access to many of the new capabilities of modern GPUs (including mobile ones) due to the limitations of OpenCL. Moreover, [38] uses an extremely limited DSL, which is a significant drawback.
TaskFlow [39] proposes a model for expressing computations in the form of static and dynamic task graphs for heterogeneous computing systems. Tasks communicate with each other by streams of data along the edges of the graph through queues using the producer-consumer scheme. TaskFlow has the ability to build heterogeneous graphs (using both CPU and GPU) and then pipelined execution similar to [13]. Such a computation model is promising since it can not only be efficiently performed on CPU and GPU, but also can be used for prototyping hardware implementations on FPGA or VLSI. The disadvantage of TaskFlow is that the algorithm must be explicitly described in terms of task graphs, which is not very convenient for development and debugging.
The PACXX compiler [40, 41] uses skeletal programming ideas, but is more convenient for use in existing code. PACXX directly implements modern C++ constructs and translates the use of STL containers into GPU buffers. Unfortunately, PACXX does not provide access to many hardware features (for example, textures) which are available even in CUDA and OpenCL. Therefore, from a practical point of view, its advantages over pragmas (for example, OpenACC) are not significant.
The main purpose of Chapel [42] is to achieve cross-platform ability, so that a single description of the algorithm can work efficiently on both the CPU and GPU. In addition, it is positioned as easier to use than CUDA. For this, the authors propose a new language oriented towards parallel programming. But unlike Halide, it is a general-purpose language. The key disadvantage of this approach is that the user has to port a significant part of the algorithm description to this new language, which is usually met with resistance in the industry because it implies a lack of cross-platform ability.
Taichi [43] is a programming language and an optimizing compiler oriented towards applications dealing with sparse data structures, including physical simulations, ray tracing and neural networks. Taichi allows users to write high-level code in the proposed language (the frontend is embedded in C++) as if they were dealing with ordinary dense multidimensional arrays. The compiler then generates an intermediate representation, optimizes it and generates C++ or CUDA code. Taichi also handles memory management and uses the unified memory access feature available in CUDA. The ability to target both CPUs (using techniques such as loop vectorization) and GPUs (although only via CUDA) is a strong point of this solution. The fact that Taichi targets operations on specific data structures allows it to produce highly optimized and efficient code, but at the same time limits its potential applications. Being a DSL, Taichi also shares the same drawbacks; however, close integration with C++ somewhat alleviates them.
Tiramisu [44] is a polyhedral compiler (that is, one considering different optimization options for the same algorithm) that specializes in high-performance code generation for different platforms. At the same time, this compiler has several limitations. First, it only supports a specialized high-level language, which makes it difficult to use in applications written in other languages and increases the time spent on porting algorithms from other programming languages. Second, it targets cluster computing and has significant limitations in terms of hardware.
The clspv compiler [45] translates OpenCL kernels into an intermediate GPU representation called SPIR-V (used by Vulkan to define shader programs) and thus can be used for Vulkan development. Unfortunately, it only supports compute shaders and is currently officially in the prototype stage (although it is already relatively stable, since it has been in development since 2017).
The Circle compiler appeared 2 years ago, but it wasn't until 2020 that it became focused on GPU programming [46]. It is currently the only C++ compiler in the world that supports the graphics functionality of modern GPUs (graphics pipeline, ray tracing pipeline, mesh shaders). But the development is still at an early stage. Circle is a traditional compiler, which has all the drawbacks of the traditional approach: if developers start using Circle as the main tool (that is, not only for shaders, but for the entire description of the algorithm), then they become dependent on it, and building for any platform (including mobile systems) is no longer possible without Circle.
2.1. Conclusion on existing solutions
General-purpose technologies do not support enough hardware features, which hinders performance and energy efficiency (for example, PACXX, OpenACC, DVMH, OpenCL, CUDA, TaskFlow). Low-level industrial APIs provide such support, but development with them is extremely laborious (Vulkan, Metal, DirectX12). Domain Specific Language (DSL) technologies and languages (Halide, OptiX) are a good solution for GPUs and even for other computing systems (FPGA or ASIC). However, their key disadvantage is that algorithms and knowledge implemented with them are difficult to transfer to other areas and difficult to integrate with the rest of the software that does not use a domain-specific language. There are no technologies which achieve 2 goals simultaneously: (1) cross-platform ability and (2) access to specific HW features, because existing solutions don't have an intermediate layer between a high-level algorithm description and its actual implementation. The best results in this direction have been achieved in [43, 44, 38, 39].
3. Proposed solution
The proposed programming technology is not general-purpose, but it considers several different fields of application and tends to be customer-oriented. At the same time, unlike, for example, Halide [35-37], it does not use domain-specific languages (DSL), but instead extracts the necessary knowledge from ordinary C++ code. One of the main advantages of our technology is that the input source code is not extended by any new language constructions or directives. It is assumed that the input source code is normal, hardware-agnostic C++ source code which in general can be compiled by any appropriate C++ compiler (with some limitations, though). This significantly improves the cross-platform ability of software developed with the suggested technology.
To achieve this, we turn the concept of programming technology upside down: instead of making a general-purpose programming technology for building various software systems, we propose an extendable technology that can be customized for a specific class of applications. Therefore, we use pattern-matching to find patterns in C++ code and transform them to efficient GPU code in Vulkan. Patterns are expressed through C++ classes, member-functions and relationships between them.
Before we proceed, it is important to note the difference between our technology and most existing parallel programming technologies like CUDA, Taichi, Halide and others: they extend the programming language with new constructions, or propose new languages, in which parallel constructions map to some efficient implementation in the hardware. Our approach is the opposite. First, we do not extend the programming language, but rather limit it. Second, our patterns don't express hardware features. Instead, they express algorithmic and architectural knowledge. Thus, hardware features are the responsibility of the translator, not the user. There could be many patterns in total, but in this work we have implemented a limited number of them. Therefore, we consider the implemented patterns by examples.
3.1. Patterns
Patterns are divided into architectural and algorithmic. An architectural pattern expresses architectural knowledge about some part of the software. It determines the behavior of the translator as a whole and, thus, is responsible for an application area (for example, image processing, ray tracing, neural networks, fluid simulation, etc.). Algorithmic patterns express algorithmic knowledge and define a narrower class of algorithms that can have efficient hardware implementations and can be found inside architectural patterns. For example: parallel reduction, data compaction (parallel append to a buffer), sorting, scan (prefix sum), building a histogram, map-reduce, etc. Now let us consider the patterns that we have implemented in the current version of our translator:
- Architectural pattern for image processing. This pattern provides basic GPGPU capabilities. The input source code looks like ordinary C++ code with loops in OpenMP style, except that we don't actually use directives (pragmas). Instead, we suppose that there is a certain class with control and kernel functions (listing 1). The kernel functions contain the code that is assumed to be ported to the GPU almost “as is”. The control functions are the functions which call kernel functions, and thus they define the logic of kernel launches and resource bindings.
- Architectural pattern for ray tracing. The goal of this pattern is to provide access to hardware-accelerated ray tracing and to efficiently call virtual functions on the GPU. Therefore, it can be considered a cross-platform OptiX analogue. The significant difference between this pattern and the previous one is that in the image processing pattern loops are assumed to be placed inside kernel functions, while in the ray tracing pattern they are assumed to be placed outside control functions (and thus outside kernels too). Therefore, the ray tracing pattern is convenient if complex and heavy code is used for each thread or each processed data element, but the data processing loop is straightforward. The image processing pattern is convenient when the number of threads (processed data elements) changes during the algorithm and inter-thread communication on the GPU is actually needed for the implementation. For example, if we need to resize (downsample) an image, we can process a small version of the image and then upscale it back.
- Algorithmic pattern for parallel reduction. For this pattern, we detect access to class-data variables (members of input class, fig. 1) and generate code for parallel reduction on GPU.
- Algorithmic pattern for parallel append of elements to a buffer and the related pattern of subsequent indirect dispatching. Imagine that you have a member function which processes input data and appends some data to a vector via “std::vector::push_back(...)”. Now you are going to process the selected data in another function. This time, the loop counter depends on “vector.size()” and thus the actual number of threads on the GPU should be different: it will be known only after the first kernel finishes; therefore, we have to insert an indirect dispatch here.
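On the CPU, this append-then-process pattern is just ordinary C++. A minimal sketch (the class and function names here are our own illustration, not naming required by the tool):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical input class for the "parallel append" pattern:
// the first kernel appends to a vector, and the second loop's trip
// count depends on vector.size(). On the GPU the push_back becomes
// an atomic append, and the generated code must place an indirect
// dispatch between the two kernels because the second kernel's
// thread count is known only after the first one finishes.
class Selector {
public:
  std::vector<int> m_selected;

  void kernel1D_SelectPositive(const int* a_data, int a_size) {
    for (int i = 0; i < a_size; i++)
      if (a_data[i] > 0)
        m_selected.push_back(a_data[i]);  // atomic append on GPU
  }

  void kernel1D_Square(int* a_out) {
    // trip count depends on the previous kernel's output size
    for (std::size_t i = 0; i < m_selected.size(); i++)
      a_out[i] = m_selected[i] * m_selected[i];
  }
};
```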
Listing 1 shows an input code example, and listings 2 and 3 show simplified output examples.
Listing 1: Example of input source code for the image processing architectural pattern: calculation of the sum of all positive numbers in an array. A kernel function and its dimension (1D, 2D or 3D) are extracted by analyzing the function name (kernel1D_ArraySumm, lines 6-12). A control function is extracted by analyzing its code: if at least one of the kernel functions is called from a function, then it is a control function (CalcArraySumm, lines 3-5). Any class data members which are accessed by kernels are placed in a single “class data buffer” (m_summ, line 14). Access to such variables is further analyzed. If a kernel writes to a single variable on different loop iterations (line 11), then we generate parallel reduction code at the end of the shader for this variable (listing 3).
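A minimal C++ reconstruction of such an input class, using only the names given in the caption (a sketch, not the verbatim listing):

```cpp
#include <cassert>

// Sketch of Listing 1's input: ordinary C++ with no pragmas.
// kernel1D_ArraySumm is recognized as a 1D kernel by its name;
// CalcArraySumm is recognized as a control function because it
// calls a kernel; m_summ goes into the "class data buffer", and
// its += across loop iterations triggers reduction code generation.
class Numbers {
public:
  void CalcArraySumm(const int* a_data, unsigned int a_dataSize) {
    kernel1D_ArraySumm(a_data, a_dataSize);
  }

  void kernel1D_ArraySumm(const int* a_data, unsigned int a_dataSize) {
    m_summ = 0;
    for (unsigned int i = 0; i < a_dataSize; i++) {
      int number = a_data[i];
      if (number > 0)
        m_summ += number;  // reduction access detected here
    }
  }

  int m_summ = 0;
};
```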
Listing 2: Given some class as input, our solution generates an interface and implementation for the GPU version of the algorithms implemented in the control functions. Due to the peculiarities of Vulkan we have to generate 2 functions for each input control function. Thus, for \texttt{CalcArraySumm} we generate two functions: \texttt{SetInOutForCalcArraySumm} and \texttt{CalcArraySummCmd}. When the first one is called, it creates a descriptor set for the input buffer (\texttt{a\_dataBuffer} in the example) and saves it to \texttt{allGeneratedDS[0]}. We have removed the input pointer parameter \texttt{a\_data}. In the generated code, pointer parameters of control and kernel functions are not used because this time all data is on the GPU and is accessed via descriptor sets in shaders. In this example 2 different shaders were generated: the first one is \texttt{ArraySummInitPipeline}, which executes the loop initialization (zeroing the sum), and the second one is \texttt{ArraySummPipeline}, which executes the loop body (listing 3).
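With stub types standing in for Vulkan handles, the shape of the generated interface described in the caption can be sketched as follows (the function names come from the caption; the bodies and stub types are placeholders of our own):

```cpp
#include <cassert>

// Stub standing in for VkDescriptorSet; real generated code stores
// Vulkan handles here.
struct DescriptorSetStub { const void* boundBuffer = nullptr; };

// Shape of the generated class from Listing 2: one function binds
// resources into a descriptor set, the other records the kernel
// dispatches (init pipeline, then loop-body pipeline) into a
// command buffer. Bodies are illustrative placeholders only.
class Numbers_GPU {
public:
  void SetInOutForCalcArraySumm(const void* a_dataBuffer) {
    allGeneratedDS[0].boundBuffer = a_dataBuffer;  // create + save DS
  }

  void CalcArraySummCmd(/* VkCommandBuffer */ void* a_cmdBuffer) {
    (void)a_cmdBuffer;
    // would record ArraySummInitPipeline, then ArraySummPipeline
    recordedDispatches = 2;
  }

  DescriptorSetStub allGeneratedDS[1];
  int recordedDispatches = 0;
};
```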
1. \texttt{__kernel void kernel1D\_ArraySumm(__global const int* a\_data, __global NumbersData* ubo, ...) \{}
2. \texttt{__local int m\_summShared[256*1*1];}
3. ...
4. \texttt{int number = a\_data[i];}
5. \texttt{if(number > 0)}
6. \texttt{m\_summShared[localId] += number;}
7. ...
8. \texttt{barrier(CLK\_LOCAL\_MEM\_FENCE);}
9. \texttt{m\_summShared[localId] += m\_summShared[localId + 128];}
10. \texttt{barrier(CLK\_LOCAL\_MEM\_FENCE);}
11. \texttt{m\_summShared[localId] += m\_summShared[localId + 64];}
12. \texttt{barrier(CLK\_LOCAL\_MEM\_FENCE);}
13. \texttt{m\_summShared[localId] += m\_summShared[localId + 32];}
14. \texttt{m\_summShared[localId] += m\_summShared[localId + 16];}
15. \texttt{m\_summShared[localId] += m\_summShared[localId + 8];}
16. \texttt{m\_summShared[localId] += m\_summShared[localId + 4];}
17. \texttt{m\_summShared[localId] += m\_summShared[localId + 2];}
18. \texttt{m\_summShared[localId] += m\_summShared[localId + 1];}
19. \texttt{if(localId == 0)}
20. \texttt{atomic\_add(&ubo->m\_summ, m\_summShared[0]);}
21. }
Listing 3: Example of a generated shader for the clspv compiler. The original loop body transforms into lines 4–6. It can be seen that access to \texttt{m\_summ} was rewritten to \texttt{m\_summShared[localId]}, which is further used in the parallel reduction at the end of the shader. Lines 13–18 implement an optimized variant of parallel reduction for Nvidia HW, assuming the warp size is 32 threads. This part changes depending on the input parameters of our translator. For example, we can turn off the optimization, assume a smaller warp size (8 for mobile GPUs), or use \texttt{subgroupAdd} instead of lines 13–18 (available only in GLSL).
### 3.2. Code generation
The proposed generator works on the principle of code morphing [47]. The essence of this approach is that, given a certain class in a program and a set of transformation rules, we can automatically generate another class with the desired properties (for example, an implementation of the algorithm on the GPU). The transformation rules are defined by the patterns mentioned above, within which the current code is processed and translated. The generated class is inherited from the input class and thus has access to all data and functions of the input class.
Input source code is processed via clang and libtooling [48]. Almost all tasks in our translator are done in two passes. On the first pass we detect patterns and their parts via libtooling: nested loops inside kernel functions, reduction accesses, accesses to class data members, etc. On the second pass we rewrite the source code using clang AST visitors. The final source code is produced via a templated text rendering approach [50]. Thus, our solution is implemented via pure source-to-source transformations and, unlike Circle, for example, we do not work with the LLVM code representation. While this approach has certain limitations (we cannot change the input programming language, for example, to Rust or Ada, which is easily achieved with LLVM in general), it also has significant advantages:
1. The generated source code for both shaders (OpenCL C for clspv [45] or GLSL) and the host C++ with Vulkan calls looks like normal hand-written code. It can be \texttt{debugged, changed or combined} with other (usually hand-written) code in any way. Thus, unlike many existing programming technologies, it is easy to distinguish errors of the generator/translator from user errors. This is a problem for OptiX or Circle, for example, because we cannot see what the programming technology actually does with the input code.
2. The ability to generate shader source code gives us great flexibility with respect to the features of the subsequent shader compiler, because we can easily add support for different hardware features. The early version of our tool used only clspv [45] for shaders. However, we quickly found that the capabilities of clspv are not enough for ray queries, virtual functions and many other things. It is possible to extend clspv to obtain the desired features in SPIR-V from OpenCL C shader source code, but this is an expensive and hard path, because working with both SPIR-V and the clspv source code requires special knowledge and significant effort. At the same time, adding support for a new hardware feature directly to the generated GLSL source code is relatively easy.
CPU <=> GPU data transfer. As mentioned in the related work review, many existing solutions solve the problem of data copying automatically. For software that uses Vulkan this is unsatisfactory in most cases, for many reasons. In the proposed approach, we generate code that executes algorithms on the GPU and performs the copying, and then let the user call this code independently. The generated function called “UpdateAll” performs this task. If the user needs data back on the CPU from some internal data of the generated class, he or she can create a new class inherited from the generated one, in which any additional algorithm or copying functionality can be implemented.
Our solution maps the entire algorithm to the GPU; therefore, in general, all generated variables and buffers are located on the GPU. However, since the generated class is inherited from the original one, it also contains all the original variables and vectors on the CPU under their own names. The user either provides his own copy implementation to the “UpdateAll” method (via the interface implementation) or uses ours from the library. Accordingly, temporary buffers are either created manually by the user, or an implementation provided by us is used to create them. In the same way, the user may manually clear unnecessary CPU data after the UpdateAll method.
## 4. Experimental evaluation
We evaluated our approach on several applications for which we generated different implementations (GPU v1—v3) using different options of our translator (fig. 1). Unlike traditional compiler flags, these options force our translator to apply different hardware features and different actual implementations of the same algorithm on the GPU; therefore, the performance difference is significant in some cases. The results of our experiments are presented in Table 1 and Table 2. The GPU implementation is usually 30-100 times faster than a multicore CPU version, which means performance is at the desired level on average. Table 2, on the other hand, demonstrates the high labor intensity of implementing such experiments manually in Vulkan. Thus, performance studies become easier with our solution.
Reduction samples (#1—#3). Here we demonstrate the ability to detect and generate different implementations of parallel reduction. Although the speedup is not very significant (which is expected for such tasks), it was stable, in contrast to multithreaded execution on the CPU, for which #1 and #2 were on average slower than the single-threaded version (we took the smallest time over several runs). GPU v1 is a cross-platform implementation which uses Vulkan 1.0, does not know the warp size and does not use subgroup operations. GPU v2 knows the warp size (passed via a command line argument) and thus may omit synchronization operations for the last several steps of the reduction (listing 3). GPU v3 knows the warp size and additionally uses subgroup operations.
Figure 1: Example applications of the proposed technology. Bloom filter (top left), spherical harmonics integration (top middle), guided Non-Local Means denoising (bottom left and bottom middle), path tracing with different materials (for testing virtual functions) and procedural textures (right), and finally an NBody simulation in the top-right corner of the image.
Table 1
Execution time in milliseconds for different algorithms and different implementations generated via the proposed technology. Because the applications are different, v1—v3 mean different optimizations for different samples; these are described in detail below. The first two rows show the time in milliseconds for calculating the sum of 1 million numbers; this task is mapped to a parallel reduction on the GPU. The third row is spherical harmonics evaluation, which is a 2D reduction with some math. NLM means the guided Non-Local Means filter. For the path tracing implementation on the CPU we used Embree ray tracing. The CPU used is an Intel Core i9 10940X, the GPU an Nvidia RTX 2080. For path tracing a 512x512 image was rendered (i.e., 256K paths were traced). '*' means that for path tracing and the v3 variant the generated code was finalized by hand due to the early stage of the GLSL generator in our solution.
<table>
<thead>
<tr>
<th>App/Impl</th>
<th>CPU (1 core)</th>
<th>CPU (14 cores)</th>
<th>GPU v1</th>
<th>GPU v2</th>
<th>GPU v3</th>
</tr>
</thead>
<tbody>
<tr>
<td>(#1) Int Arr. Σ</td>
<td>1.263 ms</td>
<td>0.271 ms</td>
<td>0.095 ms</td>
<td>0.089 ms</td>
<td>0.084 ms</td>
</tr>
<tr>
<td>(#2) Float Arr. Σ</td>
<td>1.420 ms</td>
<td>0.342 ms</td>
<td>0.104 ms</td>
<td>0.096 ms</td>
<td>0.096 ms</td>
</tr>
<tr>
<td>(#3) Sph. Eval.</td>
<td>39.73 ms</td>
<td>2.931 ms</td>
<td>0.399 ms</td>
<td>0.364 ms</td>
<td>0.320 ms</td>
</tr>
<tr>
<td>(#4) NBody</td>
<td>250400 ms</td>
<td>11920 ms</td>
<td>118.0 ms</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>(#5) Bloom</td>
<td>711.8 ms</td>
<td>52.74 ms</td>
<td>0.733 ms</td>
<td>1.420 ms</td>
<td>0.841 ms</td>
</tr>
<tr>
<td>(#6) NLM</td>
<td>88440 ms</td>
<td>6851 ms</td>
<td>422.0 ms</td>
<td>571.1 ms</td>
<td>351.0 ms</td>
</tr>
<tr>
<td>(#7) Path Trace</td>
<td>188.4 ms</td>
<td>14.92 ms</td>
<td>4.790 ms</td>
<td>1.310 ms</td>
<td>0.460 ms*</td>
</tr>
</tbody>
</table>
**NBody** (#4) is a classic GPGPU problem of quadratic complexity for which we demonstrate a 100x acceleration in comparison to the multi-threaded CPU version.
**Bloom and Non-Local Means** (#5, #6). In these samples we demonstrate the ability to change the implementation of images from buffers (**GPU v1**) to textures (**GPU v2**) and to use half floats for the texture format (**GPU v3**), which gives an essential speed-up. It is interesting to note that for these image processing examples 32-bit float textures were slower than the buffer variants. For Bloom, buffers were even faster than half-float textures, which is due to the Load/Store access and the general texture layout. In this way we show that the performance question on the GPU is not trivial and requires implementing and testing different variants of the same algorithm. Within the proposed solution these experiments can be automated.
Table 2
Lines of code for different applications. The first (C++) column shows the original line count for the input class. The second column shows the total line count for the generated source code. Vulkan (compact)
For Bloom we get a 100x speedup over the multi-threaded CPU version, while for Non-Local Means it is only 20x, which seems suboptimal for such a heavy task. In fact, there could be many optimizations for image processing (at least more aggressive pixel quantization/compression), but we believe the Halide project [35-37] did most of them, and Taichi did the same for physics simulation [53]. So, we decided to focus our performance investigation on ray tracing. For path tracing, the initially generated code (which uses exactly the same traversal algorithm) outperforms the original CPU variant by a factor of 10. We then replaced the CPU ray traversal with optimized Embree (table 2) and got only 3x. Then we added hardware-accelerated ray tracing in a compute shader (12x) and generated a single kernel via the RTX pipeline, which finally gives us 32x over multithreaded CPU path tracing with Embree.
### 4.1. Path Tracing experiments (#7)
Light transport simulation algorithms involve heavy mathematical models and require extensibility from the framework in which they are implemented. In existing CPU rendering systems this usually means an object-oriented approach. Therefore, accelerating ray tracing alone is not enough: to achieve efficiency on the GPU we must study how complex and extensible code can be implemented on the GPU. Currently, there are three general approaches:
1. Single-kernel: the whole code for light path evaluation is placed inside a single kernel. There can be different options for optimization (like OptiX state machines) [50]. This approach is good for relatively simple models, for example in computer games. However, with a growing code base its performance drops dramatically due to branch divergence and register pressure.
2. Multiple-kernel: the code is split into several smaller kernels which communicate by reading/writing data in GPU memory. The necessity to read/write data from/to memory results in a significant performance overhead, depending on the application [51].
3. Wavefront path tracing: this approach extensively uses sorting and compaction of threads to organize them into several queues which execute different computational kernels. This helps to avoid branch divergence but may result in an even bigger performance overhead [52].
Therefore, all of these approaches can be used (and are used) in real-life applications, and there is no single approach that is strictly better than the others in all cases. Depending on the available hardware and the needed features, different implementations may be required to achieve optimal performance. Even though the implementations of these approaches differ significantly in code, the essential algorithm (path tracing) and the implemented models (BRDFs) stay the same. What actually differs between these approaches is how the computer graphics algorithms are translated to GPU code.
### 4.2. Adding hardware ray tracing
We implemented the basic path tracing algorithm and several material models on the CPU and used the proposed solution to generate the GPU Vulkan-based implementation with software ray tracing in a multiple-kernel way \((GPU\ v1)\). The generation was performed using the ray tracing pattern previously described in section 3.1. The generated GPU version corresponds to the CPU version and implements a naive path tracing algorithm consisting of the following kernel launches:
1. Primary (“eye”) ray generation.
2. Loop until the maximum tracing depth is reached:
a. A ray trace kernel, which searches for an intersection and stores the surface hit (6 floats).
b. A kernel to obtain the material id for the hit surface (stores 2 floats).
c. A next-bounce kernel, which performs shading computations and stores the path state: the new ray position and direction (8 floats) and the accumulated color and throughput (8 floats).
3. A kernel which contributes the accumulated color to the output image.
Therefore, the total size of the ray payload is equal to 24 floats (96 bytes) per thread.
Next, the host part of the generated code was modified via virtual function override (we override the generated ray tracing kernel call) to use the hardware-accelerated ray tracing feature via the \(VK\_KHR\_ray\_query\) extension, which allows using RTX functionality in compute shaders (among others). The modification required less than 1000 lines of code (about the same as drawing a single triangle in Vulkan). This is \(GPU\ v2\). In this way we show how generated and hand-written code can be connected together (both the CPU and GPU parts). Considering the kernel launches described above, we simply replace the ray trace kernel with a new one which uses hardware-accelerated ray queries.
Finally, we implemented several variants of exactly the same path tracing setup via the full RTX pipeline for performance comparison. We took a part of the generated GLSL code (material sampling functions, etc.) and finalized it by adding it to the ray tracing pipeline. This is \(GPU\ v3\). In fact, there were 3 different implementations of \(GPU\ v3\), because RTX itself has many options (fig. 2, first 3 rows).
In all cases path tracing was performed with a tracing depth equal to 6 and with 8 samples per pixel (for larger sample counts Nvidia Nsight runs out of memory for the GPU trace). Measurements were made on a geometrically simple scene (about 31k triangles) featuring a variety of material models: Lambertian, perfect mirror, glossy GGX and a blended material (GGX and Lambertian BRDFs mixed with respect to a procedural noise texture mask).
Initially we did not plan to generate a single-kernel version, because for offline ray tracing applications it is not the best option due to significant performance degradation when new materials and light source models are added. We did not even plan to generate GLSL code for the logic with our generator; instead, we planned to replace specific parts of the algorithm (for example, ray tracing) via separate kernel calls. Interaction via DRAM seems natural and common for GPU programs, but it turned out that this approach is rather limited. First, clspv has only basic support for hardware features in shaders. Second, for a relatively simple code base the single-kernel variant can be significantly better, because less data is transferred between the chip and DRAM (fig. 2). Nevertheless, our \(GPU\ v2\) implementation almost caught up with the most flexible/stable variant of RTX via callable shaders (first row in fig. 2). The latter seems to be a wavefront path tracing approach implemented by Nvidia inside the RTX pipeline, and it is not cross-platform. Thus, our further studies were related to one question: can we get the same performance as a solution based on callable shaders within the multiple-kernels framework by, for example, regrouping threads at material sampling to avoid branch divergence and high register pressure?
Figure 2: Comparison of performance in millions of paths per second between different variants; 1024x1024; RTX 2080. The raygen “single”-kernel variant implements all material models inside the ray generation shader; the many-closest-hit-kernels variant performs computations for different material models inside different closest hit shaders; and the many-callable-kernels variant does so inside different callable shaders invoked from the ray generation shader. The last 4 rows are GPU v2 (ray query, with different tiling variants) and GPU v1.
### 4.3. Performance analysis and asynchronous compute for the multiple-kernel approach
A problem of the multiple-kernel approach is that the data passed between kernels is stored in DRAM. However, if the intermediate data is not large and fits into the L2 cache, the DRAM workload decreases significantly, which we analyzed via the Nvidia Nsight tool. Unfortunately, we cannot significantly reduce the number of active threads, because barriers between kernels lead to a frequent situation in which the previous kernel is still computing in a few threads while the new one cannot yet be launched. So, for an arbitrary tile size there is a tradeoff between the L2 hit rate, the amount of required memory and the DRAM throughput for the multiple-kernel approach (fig. 3, blue lines).
In fact, for the multiple-kernel approach we should try to reduce the tile size, because less memory will then be required for the intermediate data and the DRAM workload will be reduced; this means more complex computations can be done efficiently in the future. To do this efficiently we have to feed new independent work to the GPU as the previous kernel finishes execution. Thus, we decided to submit new work from an independent queue using asynchronous compute in Vulkan (fig. 3, orange lines). Say we have a 1024x1024 image split into tiles of 256x256 pixels. We can process the whole image tile by tile, or we can, for example, process two 256x256 tiles asynchronously. For a fair comparison we should take a tile size twice as big for the single queue as for the two queues (for example, 512x256 for a single queue and 256x256 for two queues) so that the actual buffer size is the same. Even then, having two asynchronous queues shows ~10-12% better performance in the best cases on the Nvidia GPU and up to 16% on AMD (fig. 3) over the tile-by-tile approach. It can be seen from fig. 4 and 5 that AMD hardware implements asynchronous compute significantly better than Nvidia.
The asynchronous tile-based implementation does not require many modifications to the code: we need to create an additional queue from a different queue family, record command buffers for each tile using alternating queues (each tile launches the same kernels as described in 4.2) and submit the commands in a multithreaded fashion (so that we do not block on fence synchronization on the CPU). Note that the number of executed command buffers depends linearly on the number of tiles.
Figure 3: Measurements for (left) an RTX 2080 with hardware-accelerated ray tracing (VK_KHR_ray_query, GPU v2) and (right) an AMD Vega 10 without it (GPU v1); 8 samples per pixel, ray tracing depth = 6, total image resolution 1024x1024. DRAM throughput decreases linearly from 90% (1024x1024) to 10% (128x128), and even further for smaller tile sizes; this is not shown in the image. Asynchronous compute (orange lines) shows a significant performance increase over the simple multiple-kernel approach.
Although rendering without splitting the image into tiles is still slightly faster for this simple scene, the difference is not essential (1-2%), and we get significantly better hardware unit metrics with the proposed tiled rendering (table 3). So, with more complex materials, tile splitting may actually become the better option. At the same time, our approach reduces the memory required for intermediate data by 8-16 times, depending on the tile size. This can be especially important for MCMC methods (Metropolis Light Transport), where large vectors are stored for each thread.
Table 3
Nvidia RTX 2080, Nsight Graphics Metrics
<table>
<thead>
<tr>
<th>Metric</th>
<th>No tiles</th>
<th>One 512x256 tile</th>
<th>Two 256x256 tiles</th>
</tr>
</thead>
<tbody>
<tr>
<td>VRAM throughput</td>
<td>48.9%</td>
<td>35.3%</td>
<td>23.8%</td>
</tr>
<tr>
<td>L2 hit rate</td>
<td>23.1%</td>
<td>28.9%</td>
<td>48.7%</td>
</tr>
<tr>
<td>L2 hit rate from L1</td>
<td>21.7%</td>
<td>28.4%</td>
<td>47.9%</td>
</tr>
<tr>
<td>CS Warp can’t launch (register limited)</td>
<td>31.6%</td>
<td>15.4%</td>
<td>6.3%</td>
</tr>
<tr>
<td>Average time</td>
<td>35 ms</td>
<td>43 ms</td>
<td>39 ms</td>
</tr>
</tbody>
</table>
Figure 4: Nsight Graphics GPU trace for the RTX 2080. Path tracing with asynchronous compute queues from different queue families. An asynchronous compute queue on Nvidia (top part of the image) runs like a “background” task. It can be seen that 8 submits from the main queue (bottom part of the image) take the same time as a single submit to the async compute queue.
Figure 5: Radeon GPU Profiler frame capture for the Vega 10. Path tracing with asynchronous compute queues from different queue families. Unlike on Nvidia, the workload migrates freely between the two queues.
## 5. Conclusions
We proposed a solution capable of alleviating the difficulties of porting a computationally intensive algorithm to the GPU by source-to-source translation of ordinary C++ code to C++ with the necessary Vulkan API calls and shader code (OpenCL C or GLSL). During code generation, different optimizations can be applied to create several implementations depending on the problem specifics and/or the available hardware. This way we are able to increase GPU developers' productivity by generating the (quite verbose) Vulkan code and applying complex optimizations automatically to achieve maximum performance. We have shown that the generated code can be connected to hand-written code by adding hardware-accelerated ray tracing. Finally, using the proposed solution and asynchronous compute in Vulkan, we performed a performance study of path tracing via the multiple-kernel approach and proposed an improvement to it which reduces the memory required for intermediate data by an order of magnitude and uses GPU memory units more efficiently (table 3).
## 6. Acknowledgments
This work is supported by the Russian Science Foundation (RSF) under grant #21-71-00037.
## 7. References
[21] SYCL, cross-platform abstraction layer, 2021. URL: https://www.khronos.org/sycl/
[31] DirectML, 2021. URL: https://github.com/microsoft/DirectML
[48] Clang documentation, 2021. URL: https://clang.llvm.org/docs/LibTooling.html
Purifying Causal Atomicity
Benjamin S. Lerner and Dan Grossman
University of Washington
{blerner, djg}@cs.washington.edu
Abstract. Atomicity-checking is a powerful approach for finding subtle concurrency errors in shared-memory multithreaded code. The goal is to verify that certain code sections appear to execute atomically to all other threads. This paper extends Farzan and Madhusudan’s recent work on causal atomicity [1], which uses a translation to Petri nets to avoid much of the imprecision of type-system based approaches, to support purity annotations in the style of Flanagan et al. [2]. Purity avoids imprecision for several key idioms, but it has previously been used only in the type-system setting. Our work is (1) compositional: a different purity analysis could be implemented with minimal extra effort, and similarly another atomicity criterion could be checked without changing the purity analysis, and (2) a conservative extension: the analysis of any program that does not use purity annotations is equivalent to the original analysis.
1 Introduction
Static analysis of lock-based shared-memory multithreaded programs is a valuable tool for finding programming errors or verifying their absence. An important recent trend is toward analyzing higher-level concurrency properties. In particular, instead of detecting data races (e.g., a write to a thread-shared variable not protected by a lock), we can verify that an entire code block is atomic: it appears to happen either all-at-once or not-at-all to any other thread. Atomicity is a common requirement for code blocks, and the absence of data races is neither necessary nor sufficient for atomicity.
Atomicity checking takes a multithreaded program with certain code sections annotated that they should be atomic, which we write \texttt{atomic \{ s \}}, and verifies that \( s \) uses mechanisms such as locks correctly to achieve atomicity. Prior work on static analysis for atomicity checking has used either type-and-effect systems or model-checking techniques. Reachability queries over Petri nets, which our work uses, represent a recent effort in the latter style.
The type-system approach [2–6] uses syntax-directed rules to assign each program statement an atomicity based on Lipton’s theory of movers [7]. Though efficient, elegant, and relatively easy to prove correct, type systems are susceptible to false positives (over-approximations) resulting from (1) the syntactic structure of the code, and (2) the thread-modular assumption that any other code in the program might run in parallel with any atomic section. Model-checking approaches [8–11] can improve precision by modeling the whole program and tracking inter-thread dependencies through shared variables and locks. Using Petri nets to model programs is particularly convenient because data- and control-dependencies are modeled directly and atomicity checking can be formulated as a query over the net’s state-space that existing tools can process.
Our work extends and adapts prior Petri-net work [1] to support purity annotations, which previously have been investigated only via type systems [2]. A pure block \texttt{pure \{ s \}} must either do no writes or terminate “abruptly” by executing a \texttt{break} statement. As later examples will demonstrate, pure blocks let us revise the definition of atomicity to allow several correct coding idioms that otherwise would be considered non-atomic (i.e., atomicity-checking false positives). However, a sound analysis must ensure pure blocks are, indeed, pure, meaning they terminate abruptly or have no effect.
Our overall contribution is an atomicity checker supporting purity annotations using Petri nets. This approach avoids the false positives from type systems and the false positives from idioms requiring purity. We have rigorously defined our analysis for a small core programming language, implemented it (using CPN-Tools [12] for building and querying Petri nets), and checked many examples including all examples in this paper. More specifically, our work has produced the following insights, results, and contributions:
– We show that the Petri-net model of causal atomicity is strictly more powerful than the type-system model of reducible atomicity. That is, every program that type-checks under the system in [13] can pass as causally atomic under the model in [1]. To our knowledge, this is the first formal treatment relating the expressiveness of a type-system approach for atomicity checking to a model-checking approach.
– We show how purity-checking can be encoded in a Petri net using several key technical insights, such as maintaining lock-sets in thread-local storage and using colored markings to track whether a pure block has done a write.
– We show that a single Petri net can compute both purity- and atomicity-checking in a way that is a \textit{conservative extension} of the atomicity analysis (if a program has no pure-blocks, the checker is equivalent to prior causal-atomicity checkers) and \textit{compositional} (purity- and atomicity-checking are essentially orthogonal, allowing variants of each to be developed independently). Moreover, we show the combined analysis is more precise than the union of the two analyses.
Space constraints compel only a high-level overview of our analysis, focusing
on how purity requires several novel extensions over the closely related work of
Farzan and Madhusudan [1]. A companion technical report [14] contains formal
definitions and proofs. Our implementation, including the full translation from
programs to Petri nets, is also available [15]. Section 2 explains the benefits
of purity and Petri nets via examples. Section 3 introduces our core language
and the basics of Petri nets. Section 4 defines our translation from programs
to Petri nets and an atomicity-checker over the resulting net. Section 5 briefly
describes our implementation. Section 6 formally describes how our approach is
more powerful than type systems with purity annotations.
2 Atomicity Analyses by Example
2.1 Reducible atomicity via type systems
Lipton’s theory of movers \cite{7} categorizes statements by how they can be reordered relative to other threads’ statements without affecting the final result. For example, a lock acquisition commutes with a subsequent operation in another thread, as no other thread can use the lock just acquired; we say acquisitions are right movers. Classifying variable accesses depends on their race-freedom: if no race exists, the access can commute in both directions; if there is a race, it cannot commute. Sequences of statements with zero or more right-movers, followed by at most one non-mover, followed by zero or more left-movers can always be reduced to a serial execution; this property is called reducible atomicity.
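The reduction argument can be phrased as a simple pattern check over mover classifications. The following minimal sketch (our illustration, not the paper's checker) encodes reducible atomicity as the pattern R* N? L*, with both-movers allowed to serve in either phase:

```python
import re

# Mover classes: "R" (right mover, e.g. lock acquire), "L" (left mover,
# e.g. lock release), "B" (both mover, e.g. race-free access), "N"
# (non-mover, e.g. racy access). A block is reducibly atomic when its
# statements match R* N? L*, treating both-movers as either kind.

def is_reducible(movers):
    s = "".join(movers)
    return re.fullmatch(r"[RB]*[NB]?[LB]*", s) is not None

# acquire(m); race-free read; race-free write; release(m)
assert is_reducible(["R", "B", "B", "L"])
# a racy access before the acquire breaks the pattern
assert not is_reducible(["N", "R", "B", "L"])
```

The second example mirrors Figure 1, where moving the racy read before the acquire makes type-checking fail.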
Flanagan and Qadeer \cite{3} use a type-and-effect system to define a static analysis that checks for reducible atomicity. For example, for the code in Figure 1, where \texttt{t} is a local variable and \texttt{balance} is a global variable normally protected by lock \texttt{m}, the type system can determine the atomic-block is incorrect. Swapping the first two statements fixes the error and atomicity-checking succeeds.
Reducible atomicity has two key limitations that later examples demonstrate. First, type-checking follows the syntactic structure of the code, which leads to brittle results: clearly equivalent programs can differ in whether atomicity-checking succeeds. Second, atomicity-checking one thread does not examine the code in other threads to determine more precisely what effects might be observed.
2.2 Causal atomicity via Petri nets
One limitation of mover-based systems comes from over-approximating the effect of variable accesses. In Figure 1, a race condition exists where another thread could change \texttt{balance} between the two accesses; this race makes atomicity checking fail. However, if some program invariant prevents the race condition (sometimes called “higher-order locking”), then the code is actually atomic.
This weakness is the strength of the Petri-net approach of causal atomicity \cite{1}. By examining the state space of whole-program behaviors, it can use additional context to see if the code in Figure 1 is atomic. As an extreme example, checking causal atomicity always succeeds if there is only one thread. More realistically, it can detect that the program in Figure 2 is causally atomic, despite the data races on \texttt{X} and \texttt{Y}. As Section 4 explains, by translating the program to an appropriate Petri net, we can check that neither one of the other threads can tell that it observed an intermediate state of the atomic block.
2.3 Pure-reducible atomicity
Several well-known idioms, such as double-checked locking and waiting on condition variables, are not reducibly atomic.\(^1\) But no intermediate states of these idioms are observable, and their behavior in all cases is indistinguishable from cases where they are indeed reducibly atomic. For these idioms, this more “abstract” approach to atomicity suffices to show program correctness; conversely, if a correctness property fails under this broader notion, it also fails under reducible atomicity. We can make this observation precise using the notion of purity.
In Figure 3, the variable \(x\) is read without holding the protecting lock \(l\), a potential race condition and therefore an “atomic” operation using reducible atomicity. The critical section guarded by \(l\) is also an “atomic” operation; the sequence of two atomic operations is not atomic. However, consider all ways this code can actually run: If the first \(if\)-test succeeds, the code breaks and skips the remaining code; this code path is indeed atomic. Otherwise, the \(if\)-statement does nothing and is followed by the critical section. Crucially, in this latter case, the entire pure block modifies no state. So if control reaches the pure block’s end, the block is equivalent to a no-op, and a no-op followed by an atomic operation is atomic. Flanagan et al. [2] generalized this observation into the notion of pure-reducible atomicity (the authors used the term “abstract atomicity”): If a pure block always either breaks or modifies no state, then atomicity checking can treat it as a no-op. (Note: Without the guard protecting \(x := \text{newX}\), the code still is pure-reducibly atomic, though it is not atomic when run. As noted above, pure-reducible atomicity guarantees correctness only when the abstract and concrete behaviors of the program coincide, which is not true here.)
In the example above, the purity annotation ensures that pure code changes no shared state. In general, it must ensure also that no locks are changed (no initially-held locks are left released, nor initially-unheld locks acquired). In Figure 4, the code acquires lock \(l\) and waits until \(x\) becomes false; we model wait as a release/acquire pair. Such code is never reducibly atomic. But each loop iteration is pure, and every execution of the atomic block is equivalent to one where the loop condition is false even before the block begins: such an execution acquires the lock, skips the loop, executes the body and releases the lock, which is an atomic sequence (assuming body is atomic). Unfortunately, the code must be rewritten to accommodate the syntactic restrictions of the type-system before it will validate as pure-reducibly atomic. As the authors noted, not all uses of wait can be so reorganized, even when such uses should be pure-reducibly atomic.
\(^1\) It is well known that the double-checked locking idiom is incorrect under many relaxed memory-consistency models [16]; we assume sequential consistency here.
Thread 1: s := X
Thread 2: atomic { X := 1; pure { while (Z != 5) skip }; Y := 2 }
Thread 3: t := Y
Thread 4: Z := 5

Fig. 5. Pure-causally atomic, but not reducibly, pure-reducibly or causally atomic
2.4 Pure-causal atomicity
In the rest of this paper, we show that combining the advantages of pure-reducible atomicity and causal atomicity yields a system that can validate all the above examples as well as examples no previous work can, under a definition we call **pure-causal atomicity**.
In the same way causal atomicity can validate programs more precisely than reducible atomicity, so pure-causal atomicity can validate programs more precisely than pure-reducible atomicity. The program in Figure 5 highlights these differences. Looking at the first three threads, and ignoring the access to \( Z \) for a moment, we see the same example as in Figure 2, so we know this cannot be pure-reducibly atomic. Since it has a loop that may repeatedly access \( Z \), a shared variable modified in the fourth thread, we know this cannot be causally atomic. Yet this is a realistic scenario: one producer thread (thread 4, above); two consumer threads (threads 1 and 3); and a thread which produces some output, waits for an input, and produces more output (thread 2). Under our system, this code does validate as pure-causally atomic: all iterations through the loop are pure, and hence can be skipped.
3 Preliminaries
3.1 The General Approach
We explain our checker for pure-causal atomicity in stages. We present the core language in Section 3.2, and the essential concepts of Petri nets in Section 3.3. Our approach is based heavily on causal atomicity [1], so we begin by explaining that system’s design. Causal atomicity first inductively translates programs from the source language into a Petri net that models the control flow and inter-thread contention over shared variables and locks, and abstracts other details (such as the values of variables). The precise notion of causal atomicity is then expressible as a decidable property of traces over that Petri net. Finally, this property can be directly computed using **colored reachability**, a standard analysis over Petri nets supported by existing tools. We present the translation of programs to Petri nets in Section 4.1, the definition of causal atomicity in Section 4.2, and the coloring rules in Section 4.3.
Our notion of pure-causal atomicity extends each of these three stages. First, we extend the translation function to support **pure** annotations (Section 4.4). We then refine the key property over traces to incorporate the results of the
purity analysis (Section 4.5). Finally, we extend the coloring rules to implement this refined definition (Section 4.6).
3.2 The language
Figure 6 defines the syntax of our language. Most of the statement forms (skip, if, etc.) have standard semantics. A loop repeats infinitely, and requires a break statement to exit abruptly by jumping to the end of the nearest enclosing block statement. We define \texttt{while (e) \{ s \}} as sugar for \texttt{block \{ loop \{ if (e) then s else break \} \}}. A program is a fixed number of threads, which we denote by \(T\) to distinguish them from substatements. The language also has two hooks for our analyses: a pure\(\{s\}\) statement indicates \(s\) should be pure, while an atomic\(\{s\}\) statement indicates \(s\) should be atomic. Both annotations have no runtime effect but are verified statically. Purity requires that on normal termination a code block not perform any variable writes or leave any locks modified (i.e., an unmatched acquire or release).
The semantics of break statements is to terminate the current block. As such, using break within a pure block transfers control outside the block; such “abruptly” exiting paths are not checked for purity, thereby permitting pure blocks to cause side-effects on abrupt termination (as in the desugaring of the while loop in Figure 5 above), and are key to purity’s utility.
3.3 Petri Nets
A Petri net [17] is a triple \(N = (P, T, F)\), with \(P\) a set of places, \(T\) a set of transitions, and \(F\) a flow relation \(F \subseteq (P \times T) \cup (T \times P)\). For our purposes, transitions model instructions in a program, and places model resources such as variables, locks, or the current program position within each thread.
Places can be marked with tokens, which are drawn from a finite set of colors. The assignment of tokens to places is called a marking. We will restrict ourselves to nets where in all reachable markings each place has at most one token. A transition \(t\) is enabled when all its pre-conditions (all places \(p\) for which an arc
(p, t) exists) are marked. When enabled transitions fire, they remove the tokens on their pre-conditions and place new, possibly differently-colored, tokens on their post-conditions (all places p with an arc (t, p)). Starting from some initial marking $M_0$, a sequence of transitions $t_1, t_2, \ldots$ is called a firing sequence if transition $t_i$ is enabled in marking $M_{i-1}$ and, after firing, produces marking $M_i$; each marking $M_i$ is reachable from $M_0$. For our purposes, as transitions correspond to instructions in the program, firing sequences correspond to execution schedules of the program. Moreover, a marking summarizes the state of the program: the current program positions for each thread, the sets of locks currently held, and various state properties on variables. An example Petri net constructed by our analysis is drawn in Figure 8 on page 9; not shown is the marking, which includes the variable places and one program counter at some point in each thread.
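The firing rule is easy to state concretely. Below is a minimal sketch, assuming a 1-safe net so that a marking can be represented as a plain set of places; the place and transition names are illustrative, not taken from the paper:

```python
# Firing rule for a 1-safe Petri net: a marking is a set of places,
# and a transition is given by its pre- and post-condition place sets.

def enabled(marking, pre):
    return pre <= marking          # every pre-condition carries a token

def fire(marking, pre, post):
    assert enabled(marking, pre)
    return (marking - pre) | post  # consume inputs, produce outputs

# Lock acquire in the paper's encoding: consume the token on l_open
# and the thread's program-counter place, produce the next pc place.
m0 = {"l_open", "pc1"}
m1 = fire(m0, pre={"l_open", "pc1"}, post={"pc2"})
assert m1 == {"pc2"}
assert not enabled(m1, {"l_open"})  # a second acquire now blocks
```

The final assertion shows how the token discipline models mutual exclusion without any extra machinery.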
The neighborhood of a transition is defined as the union of its pre- and post-conditions. If the neighborhoods of two transitions $t_1, t_2$ are disjoint, the transitions are said to be independent, denoted $t_1 \perp t_2$. Intuitively, if $t_1 \perp t_2$, the firing of one transition cannot immediately influence the firing of the other, and once both transitions have fired consecutively, the marking of the net is guaranteed to be the same regardless of the firing order. All transitions that are not independent are called dependent, denoted $t_1 \not\perp t_2$. For our purposes, as places correspond to variables, locks and program positions, dependencies between transitions correspond to control- and data-dependencies in the program. It is this notion of dependence that gives rise to the definition of causal atomicity. Given a firing sequence, the dependence relation captures exactly which scheduling interactions between threads matter, and which are artifacts of the current schedule.
To abstract away from firing sequences and capture the dependence notion directly, define a trace of a Petri net as a triple $(E, \preceq, \lambda)$, where $E$ is a set of events corresponding to the firing of transitions, $\lambda : E \rightarrow T$ labels an event with the transition that fired, and $\preceq$ is a partial order on events that respects the dependence relation. Specifically, if $\lambda(e_1) \not\perp \lambda(e_2)$, then $e_1 \preceq e_2 \vee e_2 \preceq e_1$, and if $e_1 \lessdot e_2$, then $\lambda(e_1) \not\perp \lambda(e_2)$, where $e_1 \lessdot e_2 \overset{\text{def}}{=} e_1 \prec e_2 \wedge \nexists e.\, e_1 \prec e \prec e_2$. These definitions imply that firing sequences are linearizations of traces; they are one particular ordering that respects dependencies. Furthermore, all firing sequences corresponding to the same trace lead to the same marking (i.e., program state).
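The independence check itself reduces to a disjointness test over neighborhoods. A small sketch, with transitions given as (name, pre, post) triples whose names are illustrative:

```python
# Independence of transitions: neighborhoods (pre plus post places)
# must be disjoint; otherwise the transitions are dependent.

def neighborhood(t):
    _, pre, post = t
    return pre | post

def independent(t1, t2):
    return neighborhood(t1).isdisjoint(neighborhood(t2))

# Two acquires of the same lock share the l_open place: dependent.
acq1 = ("acq1", {"l_open", "pc1a"}, {"pc1b"})
acq2 = ("acq2", {"l_open", "pc2a"}, {"pc2b"})
# A purely thread-local step of thread 2 shares nothing with thread 1.
local2 = ("local2", {"pc2b"}, {"pc2c"})

assert not independent(acq1, acq2)
assert independent(acq1, local2)
```

This mirrors the intuition above: contention over a shared resource (here the lock place) is exactly what makes two firings order-sensitive.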
4 Atomicity checking via Petri nets
4.1 Causal atomicity: Building the net
Figure 7 shows part of the translation function TRANS(s) from program statements into Petri nets. Circles represent places, boxes represent transitions, and arrows represent the flow relation (ignore for now the dashed lines). “Circle-boxes” represent subnets of the Petri net generated by recursive calls to TRANS, and depict a key structural property of the translation: any statement or expression yields a subnet beginning with a unique place $p_{in}$ and ending with some number of arcs out of some number of transitions; this structure is inductively maintained by the translation. Our translation is value-insensitive. For
Fig. 7. The TRANS function for basic constructs in our language. Dashed lines and
the \textit{l}-\textit{held} and \textit{l}-\textit{other} places will be used later.
example, \textsc{TRANS}(\textbf{if} (e) \textbf{then} S \textbf{else} S') first translates the guard expression,
\textsc{TRANS}(e). The two transitions \(e = T\) and \(e = F\) are thereby enabled, and precisely one of them, chosen nondeterministically, fires, at which point the token is
passed into \textsc{TRANS}(S) or \textsc{TRANS}(S'). The outgoing arcs of the \textbf{if} statement
are the union of the outgoing arcs of both branches.
While translating statements encodes the control flow, we must also encode
all resources—variables and locks—that the program uses. For each variable
we construct an array of variable places \(v_i\) for each of the \(n\) threads \(1 \leq i \leq n\),
each marked with a single token. Crucially, reading a variable \(v\) in thread \(i\)
depends only on the place \(v_i\), while writing to \(v\) depends on every place \(v_1, \ldots, v_n\).
This ensures all writes are causally related and read-write conflicts are faithfully
reproduced, while multiple read events are causally independent. To model lock
operations, a single place \(l_{\text{open}}\) is produced for every lock \(l\), marked with a single
token. Acquiring a lock removes the token, while releasing a lock replaces it. The
natural behavior of the net therefore models the mutual exclusion of locks—see
the \textsc{TRANS} case for lock acquisition. Later we will revise this encoding slightly
to support the needs of purity checking.
The complete translation for a given program \(P = T_1 || \cdots || T_n\), then, trans-
lates each thread, the variable places and the lock places as above, and adds an
\textbf{ERROR} place to be described below. We mark every variable place, every lock
place, and the entry point of each thread to initialize the net. An example of
this translation, showing the (unmarked) net corresponding to the program in
Figure 2, is shown in Figure 8.
4.2 Defining causal atomicity
The essence of atomicity is that all instructions in an atomic block must appear to happen indivisibly to all other threads. As such, given a trace with two events in an atomic block in some thread $T$ (i.e., events whose labels are transitions in $\text{TRANS}(T)$), no event in some second thread $T'$ (i.e., events labeled by transitions in $\text{TRANS}(T')$) should be causally "between" the two. Such events would show a data dependence (or antidependence) flowing out of and back into the supposedly atomic block, violating its atomicity.
Formally, let $e_T$ denote that event $e$ occurs in thread $T$, and let $S$ be the subnet from translating some atomic block in a program $P$. Then $S$ is causally atomic\(^2\) if there does not exist a trace where
$$\exists e_1^T, e_2^{T'}, e_3^T \in E.\; e_1 \in \text{START}(S) \land e_1 \preceq e_2 \preceq e_3 \land \nexists e^T \in \text{END}(S).\; e_1 \preceq e \preceq e_3$$
where $\text{START}(S)$ and $\text{END}(S)$ are the sets of events labeled by the first and last transitions that can possibly fire in $S$, and $T$ and $T'$ are distinct threads.
4.3 Computing causal atomicity: coloring the net
Rather than check for causal atomicity violations by generating and examining an infinite number of traces (which is not an algorithm), we examine the state space of possible markings of the net. Specifically, we can use colored tokens to encode our definition to see if causality flows across threads in an unacceptable way, and query whether a particular (bad) coloring is possible. We simply give each mark a color drawn from the set \{A, B, Y, R\}, and change colors as transitions fire:
- Initially, all tokens are colored A (achromatic).
- On entry to an atomic block in thread $T$, the mark may be turned B (blue). This corresponds to event $e_1^T$ above, and means we guess this block may in fact be non-atomic. Any transition in thread $T$ with any B inputs will propagate B to all outputs.
- If a transition in another thread $T'$ has a B input, it will turn all outputs Y (yellow); this corresponds to event $e_2^{T'}$. Any transition not in thread $T$ with any Y inputs will propagate Y to all outputs.
- Finally, if a transition in thread $T$ sees a Y input, it turns the mark R (red); this is the final event $e_3^T$, denoting an atomicity violation. If the end of the atomic block is reached (an event $e^T \in \text{END}(S)$) and the color is not R, this trace does not show an atomicity violation (we guessed wrong turning the token B).
\(^2\) This is a slightly different formulation than the definition in [1], but we prove the two equivalent in our companion technical report [14].
If there is no trace where the token is turned $B$ at the start of some atomic block and turns $R$ by the end, then the program is causally atomic. This algorithm will always terminate, as our state space—the set of reachable markings—is finite (recall we constructed our net such that markings are simply subsets of the net’s places), and there exist efficient algorithms to answer these reachability queries lazily, without computing the entire state space.
To see these rules in action, consider how they might apply to the net in Figure 8. Initially all marks start uncolored; when the $\text{begin}$ transition fires, the mark in thread 2 turns blue. When thread 2 executes $X := 1$, the place $x_1$ is marked blue as well. Suppose that thread 1 now starts and fires its read of $X$; its token (and that of $x_1$) would turn yellow. However, there is no other transition in thread 2 that will access $x_1$, so there is no way for the yellow token to turn red. Similar arguments can be made for other execution orders.
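The propagation just walked through can be sketched as a small color-update function. This is our simplified rendering of the rules above, not the CPN-Tools implementation; place names follow Figure 8 loosely:

```python
# One step of the {A, B, Y, R} coloring rules. `colors` maps each
# place to its current color; `T` is the thread whose atomic block
# we guessed non-atomic (its entry token was turned B at `begin`).

def step(colors, thread, places, T):
    inputs = {colors[p] for p in places}
    if thread == T:
        # Y input: violation (R); B input: stay blue; otherwise achromatic.
        out = "R" if "Y" in inputs else ("B" if "B" in inputs else "A")
    else:
        # Another thread touching a B or Y token turns its outputs Y.
        out = "Y" if ("B" in inputs or "Y" in inputs) else "A"
    for p in places:
        colors[p] = out
    return colors

# Thread 2's atomic block: begin turned pc2 blue; X := 1 propagates B
# to x1; thread 1's read of X then turns x1 yellow, but thread 2 never
# touches x1 again, so no R ever appears.
colors = {"pc2": "B", "x1": "A", "pc1": "A"}
step(colors, thread=2, places={"pc2", "x1"}, T=2)   # X := 1
step(colors, thread=1, places={"pc1", "x1"}, T=2)   # s := X
assert colors["x1"] == "Y" and "R" not in colors.values()
```

Running the same rules on a schedule where thread 2 later reads a yellowed place would produce R, the atomicity-violation witness.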
4.4 Pure-causal atomicity: Enhancements for purity
The translation described so far checks causal atomicity, but contains no way to check for or exploit purity in the program. We now show how to encode a purity analysis as a query over Petri nets, using its results to improve the atomicity queries, and moreover querying both purity and atomicity with the same net.
Figure 9 depicts the rest of the TRANS function. First, for each lock $l$, acquire and release operations now not only access a global place $l_{\text{open}}$, but also shuttle a mark between two thread-local places, $l_i$-held (only marked when lock $l$ is held by thread $i$) and $l_i$-other (marked otherwise). These two places permit a thread to examine the set of locks it currently holds, without any causal dependencies on other threads (i.e., without accessing any “thread-shared” places); this property will be crucial for purity checking. Next, block statements define a target for the non-local jump behavior of break statements; this is depicted in our diagrams by the horizontal dashed arrow of break (and those of all inductive calls to TRANS). All a block needs to provide is a place for those arcs to target; this place then outputs via a normally-terminating arc.
Almost all the increased complexity comes from the translation of pure statements. We need to ensure that:
- On any control-flow path that reaches the end of the pure block, the thread does no writes, any locks released are (re)acquired, and any locks acquired are released. Otherwise a purity violation is reported.
- On any control-flow path that reaches a break statement before the end of the pure block, any causal-atomicity checking (for an atomic block in this or any thread) is performed as usual, possibly reporting an atomicity violation.
- On any control-flow path that reaches the end of the pure block, the pure block’s actions must not lead to an atomicity violation. These are exactly the false positives we avoid via purity annotations.
To encode these three issues, entry to TRANS(pure $s$) makes a nondeterministic 3-way choice. If the purity transition fires, the subsequent execution of the
block will fail if the block is not pure (succeeding “vacuously” if a break statement is reached) and no atomicity checking is done. If the atomicity transition fires, the subsequent execution of the block will do normal atomicity checking (succeeding “vacuously” if the end of the block is reached without executing a break statement). If the skip transition fires, control transfers immediately to the end of the pure block.
The skip option is what allows the purity option to maintain no information with regard to atomicity checking. Since the purity option ensures no path to the end of the pure block can affect or be affected by another thread, no transitions that occur along such a path can themselves lead to a violation of causal atomicity. However, we cannot completely ignore control paths that happen to include pure blocks that do not execute break statements. For example, code of the form x := 1; pure{...}; y := x might lead to an atomicity violation; the skip option covers such cases.
---
Fig. 9. The rest of the TRANS construction to implement purity checking of locksets and variable accesses. Each lock place $l_{\text{open}}$ and each $l_i$-other place starts off marked.
Let us now focus on how we actually check for purity when the purity transition is fired. To handle variable mutations, we initialize the color of the token to a “known good” state, and update it to record a “potential problem” once a write occurs. On exiting a pure block normally, only the known-good color can continue, while the other leads to ERROR. If the block breaks, however, this color check is bypassed, matching our definition that abrupt termination can mutate state. More formally, the color checks ensure that all mutations are post-dominated by a break statement.
To handle lock manipulations, we take a “snapshot” of the set of held locks just before executing the body of the pure block, and “check” that the same locks are held on normal termination. Two points are crucial to the correctness of these constructions: first, we are guaranteed that precisely one token is present on either l_i-held or l_i-other, and therefore precisely one transition is enabled in the snapshot construction. The snapshot maintains this property for its two output places, old-l_i-held and old-l_i-other, and therefore precisely one of the four transitions is enabled in the check construction. Second, when a pure block is contained within a loop (for instance, loop {block {pure {break}}}), we must ensure break statements clean up the snapshots’ tokens before exiting the pure block; this explains the rest of the construction for break in the diagram.
4.5 Defining pure-causal atomicity
As described above, four potential kinds of traces are possible: a trace may fire purity and so check the body for purity; a trace may fire atomicity and exit abruptly, and so check the program for causal atomicity; a trace may fire skip and model the normal termination of the body, and so check the program for causal atomicity; or a trace may fire atomicity and exit normally, in which case the results are unneeded (any real errors will be found either by purity checking or by abrupt termination; the remainder are false positives).
To formalize pure-causal atomicity, we again want no event to come “between” two events in an atomic block, but now we need to reason solely about the proper kinds of traces. Therefore, define a non-pure trace as one where
\[ \forall e \in \text{TRANS}(\text{pure } s).\; \lambda(e) = \text{skip} \,\vee\, \big(\lambda(e) = \text{atomicity} \,\wedge\, \nexists e' \in \text{CURRENT}(\text{pure } s, e).\; \lambda(e') = t_{\text{check-purity}}\big) \]
where CURRENT(pure s, e) is the set of events in one contiguous execution TRANS(pure s) that contains the event e. This definition selects the non-vacuous traces, exactly those which fire the atomicity or skip transitions and, when executing the body of a pure block, execute a break statement.
\(^3\) There is a subtlety involving infinite traces, which may run forever without terminating or breaking (for example, pure { X := 42; loop {skip} } is not pure but will neither terminate nor break). To soundly reject such programs, we pragmatically require that some final transition always be reachable from every place in a pure block, which still permits all purity idioms we have so far encountered.
Only in these traces will our conclusions about pure-causal atomicity be valid. Formally, let $P$ be a program, and let $S$ be the subnet from the translation of some atomic block in $P$. Then $S$ is pure-causally atomic if there does not exist a non-pure trace where
$$\exists e_1^T, e_2^{T'}, e_3^T \in E.\; e_1 \in \text{START}(S) \land e_1 \preceq e_2 \preceq e_3 \land \nexists e^T \in \text{END}(S).\; e_1 \preceq e \preceq e_3$$
where $T$ and $T'$ are again distinct threads.
Note that the definition of purity used by pure-causal atomicity is conservative: writing $c$ to $x$ when $x$ already holds $c$ changes no state but is considered impure (recall, our translation does not model the values of variables); additionally, breaking unnecessarily after a pure but atomic action may lead to atomicity violations. A more robust notion of purity would only change the translation slightly, and not change the definition of pure-causal atomicity.
4.6 Computing pure-causal atomicity: Coloring the net
To extend our analysis to support purity, tokens in our net actually have a color drawn from the set $\{A, B, Y, R\} \times \{n, o, b, m\}$. The first component tracks causal atomicity and is propagated using the rules described above. The second component tracks purity and its integration with pure-causal atomicity. Initially, all tokens are colored $n$ (not in pure). On entry into a pure block, one of three transitions must be taken (all rules leave the first component unchanged):
- If $\text{skip}$ fires at the start of the block, the purity color is left at $n$.
- If $\text{atomicity}$ fires, the purity color is changed to $m$ (must break), indicating this must be a non-pure trace. Since we are checking atomicity, we permit all the color changes described in Section 4.3. If there is a violation of causal atomicity, at least one variable or lock place must be colored $R$; we therefore add a transition from every variable or lock place to $\text{ERROR}$, coloring it $R$ when the input place turns $R$. If we get to the $\text{check-purity}$ transition, the trace succeeds vacuously, as it was not properly a non-pure trace (in the sense above), and no conclusions can be drawn. To handle this possible non-termination of the trace, we disable the transitions to $\text{ERROR}$ until all pure blocks have terminated, at which time we are correct to report the violation.
- Finally, if $\text{purity}$ fires at the start of the block, we turn the mark $o$ (okay).
This trace checks for purity, so we disable the atomicity coloring rules—the only way this trace can reach $\text{ERROR}$ is if the block is not causally pure. The color $o$ remains unchanged until a variable write, at which point it turns $b$ (bad). When the block terminates normally, the check construction confirms that the set of held locks is unchanged, coloring $\text{ERROR}$ $R$ otherwise. When $\text{check-purity}$ is reached, a $b$ token turns $R$ and flows to $\text{ERROR}$, while an $o$ token causes the trace to abort (purity checking succeeded on this trace, but all further atomicity checking is invalid since we ignored potential causal dependencies).
Similar to our informal justification for the causal-atomicity coloring rules above, the purity coloring rules for purity $n$ and $m$ pick out precisely the non-pure traces such that the atomicity colors again correspond to these events; additionally the coloring rules for $o$ and $b$ identify any purity violations.
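The purity component of the product color can likewise be sketched as a small state machine. This is an illustrative rendering of the rules above, not the actual net encoding; the function names are ours:

```python
# Purity component {n, o, b, m} of the product color. Entry to a pure
# block fires one of three transitions; a variable write turns o into
# b; reaching check-purity decides the trace's fate.

def enter_pure(choice):
    return {"skip": "n", "atomicity": "m", "purity": "o"}[choice]

def on_write(color):
    return "b" if color == "o" else color  # only purity traces track writes

def at_check_purity(color):
    if color == "b":
        return "ERROR"    # impure: a write survived to normal exit
    if color == "o":
        return "ABORT"    # purity holds; further atomicity results invalid
    return "VACUOUS"      # an atomicity trace that never broke

assert at_check_purity(on_write(enter_pure("purity"))) == "ERROR"
assert at_check_purity(enter_pure("purity")) == "ABORT"
assert at_check_purity(enter_pure("atomicity")) == "VACUOUS"
```

The three assertions correspond to the three bullet cases: a write inside a purity trace is a violation, a clean purity trace aborts, and an atomicity trace that reaches check-purity succeeds vacuously.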
5 Implementation
We built a prototype checker for the algorithm presented above, using CPN-Tools [12, 18]. Our toolchain accepts a program in a concrete syntax resembling Figure 6 and compiles it to a set of Petri nets, each designed to test a single atomic block of the source program. These nets are each read by CPN-Tools, which in turn constructs the state-space model. Each net contains the pure-causal atomicity reachability query, written in Standard ML using the library functions exported by the tool. If any of the nets satisfy the query, i.e., ERROR is colorable with a red token, then the source program is not pure-causally atomic. If all nets fail the query, the source program is pure-causally atomic.
Our implementation verifies all the examples in this paper as atomic (except the first, which is correctly rejected as not atomic), and correctly rejects variants of the examples that break atomicity. As in [1], we leave extending our approach to full-scale languages to future work, but we see no new fundamental problems. The toolchain and examples are available from our website [14, 15].
6 Expressiveness of pure-causal atomicity
The preceding sections have defined our pure-causal atomicity analysis, and explained how we compute the results for a given program. We showed in Section 2.4 that pure-causal atomicity can check examples no prior system could. The following two results show our translation to Petri nets also does not deem any programs non-atomic that the type-system approach validates. We state the formal results here, and sketch the essential ideas of the proofs. Properly formalizing these results requires a full definition of the type systems for purity and (pure-)reducible atomicity (see the technical report [14] for full details):
**Theorem 1.** For every reducibly-atomic program $P$ that has no pure blocks, all atomic blocks in $P$ are causally atomic when translated into Petri nets.
**Sketch of Proof.** By structural induction on each statement $s$ in $P$; we strengthen the induction to show that if $s$ type-checks as reducibly atomic (recall the type system ascribes a mover to every statement, not just atomic blocks) and all substatements of $s$ are causally atomic, then $s$ itself is causally atomic. We show the most interesting case here, of sequencing two statements.
By construction, every transition in $\text{TRANS}(s)$ has a neighborhood consisting solely of places in $s$’s thread, variable places (if it is a variable access) or lock places (if it is a lock operation). Therefore, causal dependencies between threads can only occur through actual contention over shared resources.
Consider the sequence $s_1; s_2$ appearing in the source program. Assume inductively that both statements type-check, that their individual translations are causally atomic, and that $s_1; s_2$ type-checks as atomic. We proceed by contradiction, assuming that $s_1; s_2$ is not causally atomic when translated into a Petri net. It follows that if $\text{TRANS}(s_1; s_2)$ is not causally atomic, then there must be some events $e_1 \in \text{TRANS}(s_1)$, $e_3 \in \text{TRANS}(s_2)$, and some other event $e_2$ such that $e_1 \preceq e_2 \preceq e_3$. Pick $e_1$ to be the last event in $\text{TRANS}(s_1)$ to still
be causally before $e_2$, and pick $e_3$ to be the earliest event in $\text{TRANS}(s_2)$ after $e_2$. (We know $e_1$ and $e_3$ cannot both be in the same substatement, or else that substatement would not be causally atomic, contradicting our earlier assumption.) By the above paragraph, if all three events are causally dependent and in different threads, then they must access locks or variables. Suppose all three events access the same lock (the variable case is similar). Then we know $e_1$ must be an acquire, and the lock must be held at least until $e_2$. If so, then $e_2$ cannot happen between the two events, because no other thread can manipulate a lock while it is held. We therefore have a contradiction: no such event $e_2$ can exist.
**Theorem 2.** For every pure-reducibly atomic program $P$, all pure blocks in $P$ are actually pure and all atomic blocks in $P$ are pure-causally atomic when translated into Petri nets.
**Sketch of Proof.** The proof is very similar to that of the previous theorem, by structural induction on statements $s$ in $P$; we strengthen the induction to show that if (1) $s$ type-checks as pure-reducibly atomic, (2) all pure blocks in $s$ are indeed pure (proven separately; see [14]), and (3) all substatements of $s$ are pure-causally atomic, then $s$ itself is pure-causally atomic. We sketch the key new case handling pure blocks.
Assuming that a pure block $s = \text{pure}\ \{s'\}$ is pure-reducibly atomic, suppose that $\text{TRANS}(s)$ is not pure-causally atomic; we show this leads to a contradiction. Looking at the typing rules for pure-reducible atomicity, we are guaranteed by (1) that $s'$ itself is pure-reducibly atomic; by (2) we know that $s'$ is indeed pure and, using (3), by induction we therefore know $\text{TRANS}(s')$ is pure-causally atomic, i.e., there does not exist a non-pure trace where events $e_1$ and $e_3$ in $\text{TRANS}(s')$ satisfy the condition in pure-causal atomicity. Therefore, if there is a violation of the pure-causal atomicity of $\text{TRANS}(s)$, at least one of $e_1$ and $e_3$ must correspond to transitions not in $s'$.
Examining the construction of $\text{TRANS}(\text{pure} \ s')$, we see that the only transitions not in $\text{TRANS}(s')$ are part of the constructions we introduced to check purity. Crucially, each of these transitions is causally dependent only on other transitions in the current thread (this is why we needed the $l_i$-held and $l_i$-other places to be thread-local—the snapshot and checkpoint constructions can access them without any dependence on other threads); said another way, they are independent of all other threads. As a consequence, no event corresponding to these transitions can create a causal dependence with other threads, contradicting our assumption that $\text{TRANS}(s)$ is not pure-causally atomic.
7 Conclusion
We have defined a new notion of pure-causal atomicity, and shown that it extends both causal atomicity and pure-reducible atomicity in expressive power. It uses a non-trivial encoding of purity checking in Petri nets, and shows how to incorporate purity checking nearly orthogonally to atomicity checking. We formally show that causal atomicity is a conservative extension of reducible atomicity, and that pure-causal atomicity likewise extends pure-reducible atomicity.
References
Constant Partying: Growing and Handling Trees
with Constant Fits
Torsten Hothorn
Universität Zürich
Achim Zeileis
Universität Innsbruck
Abstract
This vignette describes infrastructure for regression and classification trees with simple constant fits in each of the terminal nodes. Thus, all observations that are predicted to be in the same terminal node also receive the same prediction, e.g., a mean for numeric responses or proportions for categorical responses. This class of trees is very common and includes all traditional tree variants (AID, CHAID, CART, C4.5, FACT, QUEST) and also more recent approaches like CTree. Trees inferred by any of these algorithms could in principle be represented by objects of class "constparty" in partykit that then provides unified methods for printing, plotting, and predicting. Here, we describe how one can create "constparty" objects by (a) coercion from other R classes, (b) parsing of XML descriptions of trees learned in other software systems, (c) learning a tree using one’s own algorithm.
Keywords: recursive partitioning, regression trees, classification trees, decision trees.
1. Classes and methods
This vignette describes the handling of trees with constant fits in the terminal nodes. This class of regression models includes most classical tree algorithms like AID (Morgan and Sonquist 1963), CHAID (Kass 1980), CART (Breiman, Friedman, Olshen, and Stone 1984), FACT (Loh and Vanichsetakul 1988), QUEST (Loh and Shih 1997), C4.5 (Quinlan 1993), CTree (Hothorn, Hornik, and Zeileis 2006) etc. In this class of tree models, one can compute simple predictions for new observations, such as the conditional mean in a regression setup, from the responses of those learning sample observations in the same terminal node. Therefore, such predictions can easily be computed if the following pieces of information are available: the observed responses in the learning sample, the terminal node IDs assigned to the observations, and potentially associated weights (if any).
In partykit it is easy to create a “party” object that contains these pieces of information, yielding a “constparty” object. The technical details of the “party” class are discussed in detail in Section 3.4 of vignette("partykit", package = "partykit"). In addition to the elements required for any “party”, a “constparty” needs to have: variables (fitted) and (response) (and (weights) if applicable) in the fitted data frame along with the terms for the model. If such a “party” has been created, its properties can be checked and coerced to class “constparty” by the as.constparty() function.
Note that with such a “constparty” object it is possible to compute all kinds of predictions from the subsample in a given terminal node. For example, instead of the mean response, the median (or any other quantile) could be employed. Similarly, for a categorical response the predicted probabilities (i.e., relative frequencies), the corresponding mode, or a ranking of the levels can be computed.
If only the constant fit from each terminal node is available but not the full response from the learning sample, then a “constparty” cannot be set up. Specifically, this is the case for trees saved in the XML format PMML (Predictive Model Markup Language, Data Mining Group 2014), which does not provide the full learning sample. To also support such constant-fit trees based on simpler information, partykit provides the “simpleparty” class. Inspired by the PMML format, this requires that the info of every node in the tree provides list elements prediction, n, error, and distribution. For classification trees these should contain the following node-specific information: the single predicted factor level, the learning sample size, the misclassification error (in %), and the absolute frequencies of all levels. For regression trees the contents should be: the predicted mean, the learning sample size, the error sum of squares, and NULL. The function as.simpleparty() can also coerce “constparty” trees to “simpleparty” trees by computing the above summary statistics from the full response associated with each node of the tree.
The remainder of this vignette consists of the following parts: In Section 2 we assume that the trees were fitted using some other software (either within or outside of R) and describe how these models can be coerced to “party” objects using either the “constparty” or “simpleparty” class. Emphasis is given to displaying such trees in textual and graphical ways. Subsequently, in Section 3, we show how a simple classification tree algorithm can be implemented using the partykit tools, yielding a “constparty” object. Section 4 shows how to compute predictions in both scenarios before Section 5 finally gives a brief conclusion.
2. Coercing tree objects
For the illustrations, we use the Titanic data set from package datasets, consisting of four variables on each of the 2201 Titanic passengers: gender (male, female), age (child, adult), class (1st, 2nd, 3rd, or crew), and survival (yes, no). It is set up as follows:
```r
> data("Titanic", package = "datasets")
> ttnc <- as.data.frame(Titanic)
> ttnc <- ttnc[rep(1:nrow(ttnc), ttnc$Freq), 1:4]
> names(ttnc)[2] <- "Gender"
```
The response variable describes whether or not the passenger survived the sinking of the ship.
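As a quick sanity check of this setup, the expanded data frame should contain one row per passenger; a small sketch (output omitted):

```r
> dim(ttnc)             ## 2201 passengers, 4 variables
> table(ttnc$Survived)  ## absolute frequencies of the response
```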
2.1. Coercing rpart objects
We first fit a classification tree by means of the rpart() function from package rpart (Therneau and Atkinson 1997) to this data set:
```r
> library("rpart")
> (rp <- rpart(Survived ~ ., data = ttnc))
```
```
n= 2201

node), split, n, loss, yval, (yprob)
      * denotes terminal node

 1) root 2201 711 No (0.6769650 0.3230350)
   2) Gender=Male 1731 367 No (0.7879838 0.2120162)
     4) Age=Adult 1667 338 No (0.7972406 0.2027594) *
     5) Age=Child 64 29 No (0.5468750 0.4531250)
      10) Class=3rd 48 13 No (0.7291667 0.2708333) *
      11) Class=1st,2nd 16 0 Yes (0.0000000 1.0000000) *
   3) Gender=Female 470 126 Yes (0.2680851 0.7319149)
     6) Class=3rd 196 90 No (0.5408163 0.4591837) *
     7) Class=1st,2nd,Crew 274 20 Yes (0.0729927 0.9270073) *
```
The “rpart” object rp can be coerced to a “constparty” by as.party(). Internally, this transforms the tree structure of the “rpart” tree to a “partynode” and combines it with the associated learning sample as described in Section 1. All of this is done automatically by
> (party_rp <- as.party(rp))
Model formula:
Survived ~ Class + Gender + Age
Fitted party:
[1] root
| [2] Gender in Male
| | [3] Age in Adult: No (n = 1667, err = 20.3%)
| | [4] Age in Child
| | | [5] Class in 3rd: No (n = 48, err = 27.1%)
| | | [6] Class in 1st, 2nd: Yes (n = 16, err = 0.0%)
| [7] Gender in Female
| | [8] Class in 3rd: No (n = 196, err = 45.9%)
| | [9] Class in 1st, 2nd, Crew: Yes (n = 274, err = 7.3%)
Number of inner nodes: 4
Number of terminal nodes: 5
Now, instead of the print method for “rpart” objects, the print method for “constparty” objects creates a textual display of the tree structure. In a similar way, the corresponding plot() method produces a graphical representation of this tree, see Figure 1.
By default, the predict() method for “rpart” objects computes conditional class probabilities. The same numbers are returned by the predict() method for constparty objects with type = "prob" argument (see Section 4 for more details):
> all.equal(predict(rp), predict(party_rp, type = "prob"),
+ check.attributes = FALSE)
[1] TRUE
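Besides type = "prob", the predict() method for “party” objects supports further types; a small sketch (assuming the party_rp object from above):

```r
> predict(party_rp, newdata = head(ttnc), type = "node")      ## terminal node ids
> predict(party_rp, newdata = head(ttnc), type = "response")  ## predicted classes
```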
> plot(rp)
> text(rp)
> plot(party_rp)
Figure 1: “rpart” tree of Titanic data plotted using rpart (top) and partykit (bottom) infrastructure.
Predictions are computed based on the `fitted` slot of a “constparty” object:
```r
> str(fitted(party_rp))
'data.frame': 2201 obs. of 2 variables:
$ (fitted) : int 5 5 5 5 5 5 5 5 5 5 ...
$ (response): Factor w/ 2 levels "No","Yes": 1 1 1 1 1 1 1 1 1 1 ...
```
which contains the terminal node numbers and the response for each of the training samples. So, the conditional class probabilities for each terminal node can be computed via
```r
> prop.table(do.call("table", fitted(party_rp)), 1)
        (response)
(fitted)        No       Yes
       3 0.7972406 0.2027594
       5 0.7291667 0.2708333
       6 0.0000000 1.0000000
       8 0.5408163 0.4591837
       9 0.0729927 0.9270073
```
Optionally, weights can be stored in the `fitted` slot as well.
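For illustration, other node-wise summaries can be computed directly from the `fitted` slot; the following sketch (assuming the party_rp object from above) extracts the modal response class per terminal node:

```r
> f <- fitted(party_rp)
> tapply(f[["(response)"]], f[["(fitted)"]],
+   function(y) names(which.max(table(y))))
```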
2.2. Coercing J48 objects
The `RWeka` package (Hornik, Buchta, and Zeileis 2009) provides an interface to the Weka machine learning library and we can use the `J48()` function to fit a J4.8 tree to the Titanic data
```r
> library("RWeka")
> (j48 <- J48(Survived ~ ., data = ttnc))
```
**J48 pruned tree**
```
Gender = Male
| Class = 1st
| | Age = Child: Yes (5.0)
| | Age = Adult: No (175.0/57.0)
| Class = 2nd
| | Age = Child: Yes (11.0)
| | Age = Adult: No (168.0/14.0)
| Class = 3rd: No (510.0/88.0)
| Class = Crew: No (862.0/192.0)
Gender = Female
| Class = 1st: Yes (145.0/4.0)
| Class = 2nd: Yes (106.0/13.0)
| Class = 3rd: No (196.0/90.0)
| Class = Crew: Yes (23.0/3.0)

Number of Leaves  : 10
Size of the tree : 15
```
This object can be coerced to a “party” object using
```r
> (party_j48 <- as.party(j48))
```
Model formula:
Survived ~ Class + Gender + Age
Fitted party:
```r
[1] root
| [2] Gender in Male
| | [3] Class in 1st
| | | [4] Age in Child: Yes (n = 5, err = 0.0%)
| | | [5] Age in Adult: No (n = 175, err = 32.6%)
| | [6] Class in 2nd
| | | [7] Age in Child: Yes (n = 11, err = 0.0%)
| | | [8] Age in Adult: No (n = 168, err = 8.3%)
| | [9] Class in 3rd: No (n = 510, err = 17.3%)
| | [10] Class in Crew: No (n = 862, err = 22.3%)
| [11] Gender in Female
| | [12] Class in 1st: Yes (n = 145, err = 2.8%)
| | [13] Class in 2nd: Yes (n = 106, err = 12.3%)
| | [14] Class in 3rd: No (n = 196, err = 45.9%)
| | [15] Class in Crew: Yes (n = 23, err = 13.0%)
```
Number of inner nodes: 5
Number of terminal nodes: 10
and, again, the print method from the `partykit` package creates a textual display. Note that, unlike the “rpart” trees, this tree includes multiway splits. The `plot()` method draws this tree, see Figure 2.
The conditional class probabilities computed by the `predict()` methods implemented in packages `RWeka` and `partykit` are equivalent:
```r
> all.equal(predict(j48, type = "prob"), predict(party_j48, type = "prob"),
+   check.attributes = FALSE)
[1] TRUE
```
In addition to `J48()`, `RWeka` provides several other tree learners, e.g., `M5P()` implementing M5’ and `LMT()` implementing logistic model trees, respectively. These can also be coerced
> plot(party_j48)
Figure 2: “J48” tree of Titanic data plotted using partykit infrastructure.
using `as.party()`. However, as these are not constant-fit trees this yields plain “party” trees with some character information stored in the info slot.
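For example, a logistic model tree could be coerced in the same way (a sketch, assuming RWeka is installed; as explained above, the result is a plain “party” object rather than a constant-fit tree):

```r
> lmt <- LMT(Survived ~ ., data = ttnc)
> (party_lmt <- as.party(lmt))
```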
2.3. Importing trees from PMML files
The previous two examples showed how trees learned by other R packages can be handled in a unified way using partykit. Additionally, partykit can also be used to import trees from any other software package that supports the PMML (Predictive Model Markup Language) format.
As an example, we used SPSS to fit a QUEST tree to the Titanic data and exported this from SPSS in PMML format. This file is shipped along with the partykit package and we can read it as follows:
```r
> ttnc_pmml <- file.path(system.file("pmml", package = "partykit"),
+ "ttnc.pmml")
> (ttnc_quest <- pmmlTreeModel(ttnc_pmml))
```
Model formula:
Survived ~ Gender + Class + Age
Fitted party:
```
[1] root
|   [2] Gender in Female
|   |   [3] Class in 3rd, Crew: Yes (n = 219, err = 49.8%)
|   |   [4] Class in 1st, 2nd
|   |   |   [5] Class in 2nd: Yes (n = 106, err = 12.3%)
|   |   |   [6] Class in 1st: Yes (n = 145, err = 2.8%)
|   [7] Gender in Male
|   |   [8] Class in 3rd, 2nd, Crew
|   |   |   [9] Age in Child: No (n = 59, err = 40.7%)
|   |   |   [10] Age in Adult
|   |   |   |   [11] Class in Crew, 3rd
|   |   |   |   |   [12] Class in Crew: No (n = 862, err = 22.3%)
|   |   |   |   |   [13] Class in 3rd: No (n = 462, err = 16.2%)
|   |   |   |   [14] Class in 2nd: No (n = 168, err = 8.3%)
|   |   [15] Class in 1st: No (n = 180, err = 34.4%)
```
Number of inner nodes: 7
Number of terminal nodes: 8
The object `ttnc_quest` is of class “simpleparty” and the corresponding graphical display is shown in Figure 3. As explained in Section 1, the full learning data are not part of the PMML description and hence one can only obtain and display the summarized information provided by PMML.
In this particular case, however, we have the learning data available in R because we had exported the data from R to begin with. Hence, for this tree we can augment the “simpleparty”
> plot(ttnc_quest)

Figure 3: QUEST tree for Titanic data, fitted using SPSS and exported via PMML.
with the full learning sample to create a "constparty". As SPSS had reordered some factor levels we need to carry out this reordering as well:
```r
> ttnc2 <- ttnc[, names(ttnc_quest$data)]
> for(n in names(ttnc2)) {
+ if(is.factor(ttnc2[[n]])) ttnc2[[n]] <- factor(ttnc2[[n]], levels = levels(ttnc_quest$data[[n]]))
+ }
```
Using this data all information for a "constparty" can be easily computed:
```r
> ttnc_quest2 <- party(ttnc_quest$node,
+ data = ttnc2,
+ fitted = data.frame(
+ "(fitted)" = predict(ttnc_quest, ttnc2, type = "node"),
+ "(response)" = ttnc2$Survived,
+ check.names = FALSE),
+ terms = terms(Survived ~ ., data = ttnc2)
+ )
> ttnc_quest2 <- as.constparty(ttnc_quest2)
```
This object is plotted in Figure 4.
Furthermore, we briefly point out that there is also the R package `pmml` (Williams, Jena, Hahsler, Zementis Inc., Ishwaran, Kogalur, and Guha 2014), part of the `rattle` project (Williams 2011), which allows exporting PMML files for “rpart” trees from R. For example, for the "rpart" tree for the Titanic data:
Figure 4: QUEST tree for Titanic data, fitted using SPSS, exported via PMML, and transformed into a "constparty" object.
> library("pmml")
> tfile <- tempfile()
> write(toString(pmml(rp)), file = tfile)
Then, we can simply read this file and inspect the resulting tree
> (party_pmml <- pmmlTreeModel(tfile))
Model formula:
Survived ~ Class + Gender + Age
Fitted party:
[1] root
| [2] Gender in Male
| | [3] Age in Adult: No (n = 1667, err = 20.3%)
| | [4] Age in Child
| | | [5] Class in 3rd: No (n = 48, err = 27.1%)
| | | [6] Class in 1st, 2nd: Yes (n = 16, err = 0.0%)
| [7] Gender in Female
| | [8] Class in 3rd: No (n = 196, err = 45.9%)
| | [9] Class in 1st, 2nd, Crew: Yes (n = 274, err = 7.3%)
Number of inner nodes: 4
Number of terminal nodes: 5
> all.equal(predict(party_rp, newdata = ttnc, type = "prob"),
+ predict(party_pmml, newdata = ttnc, type = "prob"),
+ check.attributes = FALSE)
[1] TRUE
Further example PMML files created with rattle are available from the Data Mining Group web page, e.g., http://www.dmg.org/pmml_examples/rattle_pmml_examples/AuditTree.xml or http://www.dmg.org/pmml_examples/rattle_pmml_examples/IrisTree.xml.
3. Growing a simple classification tree
Although the partykit package offers an extensive toolbox for handling trees along with implementations of various tree algorithms, it does not offer unified infrastructure for growing trees. However, once you know how to estimate splits from data, it is fairly straightforward to implement trees. Consider a very simple CHAID-style algorithm (in fact so simple that we would advise not to use it for any real application). We assume that both response and explanatory variables are factors, as for the Titanic data set. First we determine the best explanatory variable by means of a global $\chi^2$ test, i.e., splitting the response into all levels of each explanatory variable. Then, for the selected explanatory variable we search for the best binary split by means of $\chi^2$ tests, i.e., we cycle through all potential split points and assess the quality of the split by comparing the distributions of the response in the two groups so defined. In both cases, we select the split variable/point with the lowest $p$-value from the $\chi^2$ test, but only if the global test is significant at Bonferroni-corrected level $\alpha = 0.01$.
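To make the selection criterion concrete, the following sketch computes the log $p$-value of the global $\chi^2$ test for a single explanatory variable (assuming the ttnc data from Section 2); the implementation below applies the same computation to all candidate variables:

```r
> ct <- suppressWarnings(chisq.test(table(ttnc$Survived, ttnc$Gender),
+   correct = FALSE))
> pchisq(ct$statistic, ct$parameter, log = TRUE, lower.tail = FALSE)
```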
This strategy can be implemented based on the data (response and explanatory variables) and some case weights as follows (response is just the name of the response and data is a data frame with all variables):
```r
> findsplit <- function(response, data, weights, alpha = 0.01) {
+
+   ## extract response values from data
+   y <- factor(rep(data[[response]], weights))
+
+   ## perform chi-squared test of y vs. x
+   mychisqtest <- function(x) {
+     x <- factor(x)
+     if(length(levels(x)) < 2) return(NA)
+     ct <- suppressWarnings(chisq.test(table(y, x), correct = FALSE))
+     pchisq(ct$statistic, ct$parameter, log = TRUE, lower.tail = FALSE)
+   }
+   xselect <- which(names(data) != response)
+   logp <- sapply(xselect, function(i) mychisqtest(rep(data[[i]], weights)))
+   names(logp) <- names(data)[xselect]
+
+   ## Bonferroni-adjusted p-value small enough?
+   if(all(is.na(logp))) return(NULL)
+   minp <- exp(min(logp, na.rm = TRUE))
+   minp <- 1 - (1 - minp)^sum(!is.na(logp))
+   if(minp > alpha) return(NULL)
+
+   ## for selected variable, search for split minimizing p-value
+   xselect <- xselect[which.min(logp)]
+   x <- rep(data[[xselect]], weights)
+
+   ## set up all possible splits in two kid nodes
+   lev <- levels(x[drop = TRUE])
+   if(length(lev) == 2) {
+     splitpoint <- lev[1]
+   } else {
+     comb <- do.call("c", lapply(1:(length(lev) - 2),
+       function(x) combn(lev, x, simplify = FALSE)))
+     xlogp <- sapply(comb, function(q) mychisqtest(x %in% q))
+     splitpoint <- comb[[which.min(xlogp)]]
+   }
+
+   ## split into two groups (setting groups that do not occur to NA)
+   splitindex <- !(levels(data[[xselect]]) %in% splitpoint)
+   splitindex[!(levels(data[[xselect]]) %in% lev)] <- NA_integer_
+   splitindex <- splitindex - min(splitindex, na.rm = TRUE) + 1L
+
+   ## return split as party split object
+   return(partysplit(varid = as.integer(xselect),
+     index = splitindex,
+     info = list(p.value = 1 - (1 - exp(logp))^sum(!is.na(logp)))))
+ }
```
In order to actually grow a tree on data, we have to set up the recursion for growing a recursive
“partynode” structure:

```r
> growtree <- function(id = 1L, response, data, weights, minbucket = 30) {
+
+   ## for fewer than minbucket observations, stop here
+   if (sum(weights) < minbucket) return(partynode(id = id))
+
+   ## find best split
+   sp <- findsplit(response, data, weights)
+   ## no split found, stop here
+   if (is.null(sp)) return(partynode(id = id))
+
+   ## actually split the data
+   kidids <- kidids_split(sp, data = data)
+
+   ## set up all daughter nodes
+   kids <- vector(mode = "list", length = max(kidids, na.rm = TRUE))
+   for (kidid in 1:length(kids)) {
+     ## select observations for current node
+     w <- weights
+     w[kidids != kidid] <- 0
+     ## get next node id
+     if (kidid > 1) {
+       myid <- max(nodeids(kids[[kidid - 1]]))
+     } else {
+       myid <- id
+     }
+     ## start recursion on this daughter node
+     kids[[kidid]] <- growtree(id = as.integer(myid + 1), response, data, w)
+   }
+
+   ## return nodes
+   return(partynode(id = as.integer(id), split = sp, kids = kids,
+     info = list(p.value = min(info_split(sp)$p.value, na.rm = TRUE))))
+ }
```
```
A very rough sketch of a formula-based user interface sets up the data and calls `growtree()`:
```r
> mytree <- function(formula, data, weights = NULL) {
+
+ ## name of the response variable
+ response <- all.vars(formula)[1]
+ ## data without missing values, response comes last
+ data <- data[complete.cases(data), c(all.vars(formula)[-1], response)]
+ ## data is factors only
+ stopifnot(all(sapply(data, is.factor)))
+
+ if (is.null(weights)) weights <- rep(1L, nrow(data))
+ ## weights are case weights, i.e., integers
+ stopifnot(length(weights) == nrow(data) &
+ max(abs(weights - floor(weights))) < .Machine$double.eps)
+
+ ## grow tree
+ nodes <- growtree(id = 1L, response, data, weights)
+
+ ## compute terminal node number for each observation
+ fitted <- fitted_node(nodes, data = data)
+
+ ## return rich constparty object
+ ret <- party(nodes, data = data,
+ fitted = data.frame("(fitted)" = fitted,
+ "(response)" = data[[response]],
+ "(weights)" = weights,
+ check.names = FALSE),
+ terms = terms(formula))
+ as.constparty(ret)
+ }
```
The call to the constructor `party()` sets up a “party” object with the tree structure contained in `nodes`, the training samples in `data` and the corresponding `terms` object. Class “constparty” inherits all slots from class “party” and has an additional `fitted` slot for storing the terminal node numbers for each sample in the training data, the response variable(s) and case weights. The `fitted` slot is a “data.frame” containing three variables: The fitted terminal node identifiers "(fitted)", an integer vector of the same length as `data`; the response variables "(response)" as a vector (or `data.frame` for multivariate responses) with the same number of observations; and optionally a vector of weights "(weights)". The additional `fitted` slot allows to compute arbitrary summary measures for each terminal node by simply subsetting the "(response)" and "(weights)" slots by "(fitted)" before computing (weighted) means, medians, empirical cumulative distribution functions, Kaplan-Meier estimates or whatever summary statistic might be appropriate for a certain response. The `print()`, `plot()`, and `predict()` methods for class “constparty” work this way with suitable defaults for the summary statistics depending on the class of the response(s).
We now can fit this tree to the Titanic data; the `print()` method provides us with a first overview on the resulting model
```r
> (myttnc <- mytree(Survived ~ Class + Age + Gender, data = ttnc))
Model formula:
Survived ~ Class + Age + Gender
Fitted party:
[1] root
|   [2] Gender in Male
|   |   [3] Class in 1st
|   |   |   [4] Age in Child: Yes (n = 5, err = 0.0%)
|   |   |   [5] Age in Adult: No (n = 175, err = 32.6%)
|   |   [6] Class in 2nd, 3rd, Crew
|   |   |   [7] Age in Child
|   |   |   |   [8] Class in 2nd: Yes (n = 11, err = 0.0%)
|   |   |   |   [9] Class in 3rd: No (n = 48, err = 27.1%)
|   |   |   [10] Age in Adult
|   |   |   |   [11] Class in Crew: No (n = 862, err = 22.3%)
|   |   |   |   [12] Class in 2nd, 3rd: No (n = 630, err = 14.1%)
|   [13] Gender in Female
|   |   [14] Class in 3rd: No (n = 196, err = 45.9%)
|   |   [15] Class in 1st, 2nd, Crew: Yes (n = 274, err = 7.3%)
Number of inner nodes:    7
Number of terminal nodes: 8
```
Figure 5: Classification tree fitted by the `mytree()` function to the `ttnc` data.
Of course, we can immediately use `plot(myttnc)` to obtain a graphical representation of this tree; the result is given in Figure 5. The default behavior for trees with categorical responses is simply inherited from “constparty” and hence we readily obtain bar plots in all terminal nodes.
As the tree is fairly large, we might be interested in pruning the tree to a more reasonable size. For this purpose the `partykit` package provides the `nodeprune()` function that can prune back to nodes with selected IDs. As `nodeprune()` (by design) does not provide a specific pruning criterion, we need to determine ourselves which nodes to prune. Here, one idea could be to impose significance at a higher level than the default $10^{-2}$ – say $10^{-5}$ to obtain a strongly pruned tree. Hence we use `nodeapply()` to extract the minimal Bonferroni-corrected $p$-value from all inner nodes:
```r
> nid <- nodeids(myttnc)
> iid <- nid[!(nid %in% nodeids(myttnc, terminal = TRUE))]
> (pval <- unlist(nodeapply(myttnc, ids = iid,
+ FUN = function(n) info_node(n)$p.value)))
```
```
           1            2            3            6            7
0.000000e+00 2.965383e-06 1.756527e-03 6.933623e-05 8.975754e-06
          10           13
2.992870e-05 0.000000e+00
```
Then, the pruning of the nodes with the larger p-values can be simply carried out by
```r
> myttnc2 <- nodeprune(myttnc, ids = iid[pval > 1e-5])
```
The corresponding visualization is shown in Figure 6.
The accuracy of the tree built using the default options could be assessed by the bootstrap, for example. Here, we want to compare our tree for the Titanic survivor data with a simple logistic regression model. First, we fit this simple GLM and compute the (in-sample) log-likelihood:
```r
> logLik(glm(Survived ~ Class + Age + Gender, data = ttnc,
+     family = binomial()))
'log Lik.' -1105.031 (df=6)
```
For our tree, we set up 25 bootstrap samples
```r
> bs <- rmultinom(25, nrow(ttnc), rep(1, nrow(ttnc)) / nrow(ttnc))
```
and implement the log-likelihood of a binomial model
```r
> bloglik <- function(prob, weights)
+     sum(weights * dbinom(ttnc$Survived == "Yes", size = 1,
+         prob[,"Yes"], log = TRUE))
```
What remains to be done is to iterate over all bootstrap samples, to refit the tree on the bootstrap sample and to evaluate the log-likelihood on the out-of-bootstrap samples based on the trees’ predictions (details on how to compute predictions are given in the next section):
```r
> f <- function(w) {
+     tr <- mytree(Survived ~ Class + Age + Gender, data = ttnc, weights = w)
+     bloglik(predict(tr, newdata = ttnc, type = "prob"), as.numeric(w == 0))
+ }
> apply(bs, 2, f)
[25] -389.4687
```
We see that the in-sample log-likelihood of the linear logistic regression model is much smaller than the out-of-sample log-likelihood found for our tree and thus we can conclude that our tree-based approach fits the data better than the linear model.
### 4. Predictions
As argued in Section 1, arbitrary types of predictions can be computed from “constparty” objects because the full empirical distribution of the response in the learning sample nodes is available. All of these can be computed in the `predict()` method for “constparty” objects by supplying a suitable aggregation function. However, as certain types of predictions are much more commonly used, these are available even more easily by setting a `type` argument.
The prediction type can be either "node", "response", or "prob" (see Table 1). The idea is that "response" always returns a prediction of the same class as the original response and "prob" returns some object that characterizes the entire empirical distribution. Hence, for different response classes, different types of predictions are produced, see Table 1 for an overview. Additionally, for “numeric” responses type = "quantile" and type = "density" are available. By default, these return functions for computing predicted quantiles and probability densities, respectively, but optionally these functions can be directly evaluated at given values and then return a vector/matrix.
Here, we illustrate all different predictions for all possible combinations of the explanatory factor levels.
```r
> nttnc <- expand.grid(Class = levels(ttnc$Class),
+     Gender = levels(ttnc$Gender), Age = levels(ttnc$Age))
> nttnc
```
<table>
<thead>
<tr>
<th>Response class</th>
<th>type = "node"</th>
<th>type = "response"</th>
<th>type = "prob"</th>
</tr>
</thead>
<tbody>
<tr>
<td>"factor"</td>
<td>terminal node number</td>
<td>majority class</td>
<td>class probabilities</td>
</tr>
<tr>
<td>"numeric"</td>
<td>terminal node number</td>
<td>mean</td>
<td>ECDF</td>
</tr>
<tr>
<td>"Surv"</td>
<td>terminal node number</td>
<td>median survival time</td>
<td>Kaplan-Meier</td>
</tr>
</tbody>
</table>
Table 1: Overview on type of predictions computed by the predict() method for “constparty” objects. For multivariate responses, combinations thereof are returned.
```
   Class Gender   Age
1    1st   Male Child
2    2nd   Male Child
3    3rd   Male Child
4   Crew   Male Child
5    1st Female Child
6    2nd Female Child
7    3rd Female Child
8   Crew Female Child
9    1st   Male Adult
10   2nd   Male Adult
11   3rd   Male Adult
12  Crew   Male Adult
13   1st Female Adult
14   2nd Female Adult
15   3rd Female Adult
16  Crew Female Adult
```
The corresponding predicted nodes, modes, and probability distributions are:
> predict(myttnc, newdata = nttnc, type = "node")
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
4 8 9 9 15 15 14 15 5 12 12 11 15 15 14 15
> predict(myttnc, newdata = nttnc, type = "response")
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Yes Yes No Yes Yes Yes No Yes No No No No Yes Yes No Yes
Levels: No Yes
> predict(myttnc, newdata = nttnc, type = "prob")
No Yes
1 0.0000000 1.0000000
2 0.0000000 1.0000000
3 0.7291667 0.2708333
4 0.0000000 1.0000000
5 0.0729927 0.9270073
6 0.0729927 0.9270073
7 0.5408163 0.4591837
8 0.0729927 0.9270073
9 0.6742857 0.3257143
10 0.8587302 0.1412698
11 0.8587302 0.1412698
12 0.7772622 0.2227378
Furthermore, the `predict()` method features a `FUN` argument that can be used to compute customized predictions. If we are, say, interested in the rank of the probabilities for the two classes, we can simply specify a function that implements this feature:
```r
> predict(myttnc, newdata = nttnc, FUN = function(y, w)
+ rank(table(rep(y, w))))
```
```
   No Yes
1   1   2
2   1   2
3   2   1
4   1   2
5   1   2
6   1   2
7   2   1
8   1   2
9   2   1
10  2   1
11  2   1
12  2   1
13  1   2
14  1   2
15  2   1
16  1   2
```
The user-supplied function `FUN` takes two arguments, `y` is the response and `w` is a vector of weights (case weights in this situation). Of course, it would have been easier to do these computations directly on the conditional class probabilities (`type = "prob"`), but the approach taken here for illustration generalizes to situations where this is not possible, especially for numeric responses.
### 5. Conclusion
The classes “`constparty`” and “`simpleparty`” introduced here can be used to represent trees with constant fits in the terminal nodes, including most of the traditional tree variants. For a number of implementations it is possible to convert the resulting trees to one of these classes, thus offering unified methods for handling constant-fit trees. User-extensible methods for printing and plotting these trees are available. Also, computing non-standard predictions, such as the median or empirical cumulative distribution functions, is easily possible within this framework. With the infrastructure provided in `partykit` it is rather straightforward to implement a new (or old) tree algorithm and therefore a prototype implementation of fancy ideas for improving trees is only a couple lines of R code away.
Affiliation:
Torsten Hothorn
Institut für Sozial- und Präventivmedizin, Abteilung Biostatistik
Universität Zürich
Hirschengraben 84
CH-8001 Zürich, Switzerland
E-mail: Torsten.Hothorn@R-project.org
URL: http://user.math.uzh.ch/hothorn/
Achim Zeileis
Department of Statistics
Faculty of Economics and Statistics
Universität Innsbruck
Universitätsstr. 15
6020 Innsbruck, Austria
E-mail: Achim.Zeileis@R-project.org
URL: http://eeecon.uibk.ac.at/~zeileis/
Status of this Memo
This RFC is the specification of the MD4 Digest Algorithm. If you are going to implement MD4, it is suggested you do it this way. This memo is for informational use and does not constitute a standard. Distribution of this memo is unlimited.
Table of Contents
1. Abstract
2. Terminology and Notation
3. MD4 Algorithm Description
4. Extensions
5. Summary
6. Acknowledgements
APPENDIX - Reference Implementation
Security Considerations
Author's Address
1. Abstract
This note describes the MD4 message digest algorithm. The algorithm takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. It is conjectured that it is computationally infeasible to produce two messages having the same message digest, or to produce any message having a given prespecified target message digest. The MD4 algorithm is thus ideal for digital signature applications, where a large file must be "compressed" in a secure manner before being signed with the RSA public-key cryptosystem.
The MD4 algorithm is designed to be quite fast on 32-bit machines. On a SUN Sparc station, MD4 runs at 1,450,000 bytes/second. On a DEC MicroVax II, MD4 runs at approximately 70,000 bytes/second. On a 20MHz 80286, MD4 runs at approximately 32,000 bytes/second. In addition, the MD4 algorithm does not require any large substitution tables; the algorithm can be coded quite compactly.
The MD4 algorithm is being placed in the public domain for review and possible adoption as a standard.
2. Terminology and Notation
In this note a "word" is a 32-bit quantity and a byte is an 8-bit quantity. A sequence of bits can be interpreted in a natural manner as a sequence of bytes, where each consecutive group of 8 bits is interpreted as a byte with the high-order (most significant) bit of each byte listed first. Similarly, a sequence of bytes can be interpreted as a sequence of 32-bit words, where each consecutive group of 4 bytes is interpreted as a word with the low-order (least significant) byte given first.
Let \( x_i \) denote "\( x \) sub \( i \)". If the subscript is an expression, we surround it in braces, as in \( x_{i+1} \). Similarly, we use ^ for superscripts (exponentiation), so that \( x^i \) denotes \( x \) to the \( i \)-th power.
Let the symbol "+" denote addition of words (i.e., modulo- \( 2^{32} \) addition). Let \( X <<< s \) denote the 32-bit value obtained by circularly shifting (rotating) \( X \) left by \( s \) bit positions. Let \( \text{not}(X) \) denote the bit-wise complement of \( X \), and let \( X \lor Y \) denote the bit-wise OR of \( X \) and \( Y \). Let \( X \oplus Y \) denote the bit-wise XOR of \( X \) and \( Y \), and let \( XY \) denote the bit-wise AND of \( X \) and \( Y \).
3. MD4 Algorithm Description
We begin by supposing that we have a b-bit message as input, and that we wish to find its message digest. Here \( b \) is an arbitrary nonnegative integer; \( b \) may be zero, it need not be a multiple of 8, and it may be arbitrarily large. We imagine the bits of the message written down as follows:
\[
m_0 \ m_1 \ldots \ m_{b-1}
\]
The following five steps are performed to compute the message digest of the message.
Step 1. Append padding bits
The message is "padded" (extended) so that its length (in bits) is congruent to 448, modulo 512. That is, the message is extended so that it is just 64 bits shy of being a multiple of 512 bits long. Padding is always performed, even if the length of the message is already congruent to 448, modulo 512 (in which case 512 bits of padding are added).
Padding is performed as follows: a single "1" bit is appended to the message, and then enough zero bits are appended so that the length in bits of the padded message becomes congruent to 448, modulo 512.
Step 2. Append length
A 64-bit representation of b (the length of the message before the padding bits were added) is appended to the result of the previous step. In the unlikely event that b is greater than 2^64, then only the low-order 64 bits of b are used. (These bits are appended as two 32-bit words and appended low-order word first in accordance with the previous conventions.)
At this point the resulting message (after padding with bits and with b) has a length that is an exact multiple of 512 bits. Equivalently, this message has a length that is an exact multiple of 16 (32-bit) words. Let M[0 ... N-1] denote the words of the resulting message, where N is a multiple of 16.
Step 3. Initialize MD buffer
A 4-word buffer (A,B,C,D) is used to compute the message digest. Here each of A,B,C,D is a 32-bit register. These registers are initialized to the following values (in hexadecimal, low-order bytes first):
- word A: 01 23 45 67
- word B: 89 ab cd ef
- word C: fe dc ba 98
- word D: 76 54 32 10
Step 4. Process message in 16-word blocks
We first define three auxiliary functions that each take as input three 32-bit words and produce as output one 32-bit word.
\[
\begin{align*}
f(X,Y,Z) &= XY \lor \neg(X)Z \\
g(X,Y,Z) &= XY \lor XZ \lor YZ \\
h(X,Y,Z) &= X \oplus Y \oplus Z
\end{align*}
\]
In each bit position \(f\) acts as a conditional: if \(x\) then \(y\) else \(z\). (The function \(f\) could have been defined using + instead of \(\lor\) since \(XY\) and \(\neg(X)Z\) will never have 1’s in the same bit position.) In each bit position \(g\) acts as a majority function: if at least two of \(x, y, z\) are on, then \(g\) has a one in that bit position, else \(g\) has a zero. It is interesting to note that if
the bits of $X$, $Y$, and $Z$ are independent and unbiased, then each bit of $f(X,Y,Z)$ will be independent and unbiased, and similarly each bit of $g(X,Y,Z)$ will be independent and unbiased. The function $h$ is the bit-wise "xor" or "parity" function; it has properties similar to those of $f$ and $g$.
Do the following:
For $i = 0$ to $N/16-1$ do /* process each 16-word block */
For $j = 0$ to $15$ do: /* copy block $i$ into $X$ */
Set $X[j]$ to $M[i*16+j]$.
end /* of loop on $j$ */
Save $A$ as $AA$, $B$ as $BB$, $C$ as $CC$, and $D$ as $DD$.
[Round 1]
Let $[A B C D i s]$ denote the operation
$A = (A + f(B,C,D) + X[i]) <<< s$ .
Do the following 16 operations:
$[A B C D 0 3]$
$[D A B C 1 7]$
$[C D A B 2 11]$
$[B C D A 3 19]$
$[A B C D 4 3]$
$[D A B C 5 7]$
$[C D A B 6 11]$
$[B C D A 7 19]$
$[A B C D 8 3]$
$[D A B C 9 7]$
$[C D A B 10 11]$
$[B C D A 11 19]$
$[A B C D 12 3]$
$[D A B C 13 7]$
$[C D A B 14 11]$
$[B C D A 15 19]$
[Round 2]
Let $[A B C D i s]$ denote the operation
$A = (A + g(B,C,D) + X[i] + 5A827999) <<< s$ .
(The value $5A..99$ is a hexadecimal 32-bit constant, written with the high-order digit first. This constant represents the square root of 2. The octal value of this constant is $013240474631$. See Knuth, The Art of Computer Programming, Volume 2 (Seminumerical Algorithms), Second Edition (1981), Addison-Wesley. Table 2, page 660.)
Do the following 16 operations:
$[A B C D 0 3]$
$[D A B C 4 5]$
$[C D A B 8 9]$
$[B C D A 12 13]$
$[A B C D 1 3]$
$[D A B C 5 5]$
$[C D A B 9 9]$
$[B C D A 13 13]$
$[A B C D 2 3]$
$[D A B C 6 5]$
$[C D A B 10 9]$
$[B C D A 14 13]$
$[A B C D 3 3]$
$[D A B C 7 5]$
$[C D A B 11 9]$
$[B C D A 15 13]$
[Round 3]
Let \([A B C D i s]\) denote the operation
\[A = (A + h(B,C,D) + X[i] + 6ED9EBA1) <<< s.\]
(The value 6E..A1 is a hexadecimal 32-bit constant, written with the high-order digit first. This constant represents the square root of 3. The octal value of this constant is 015666365641. See Knuth, The Art of Computer Programming, Volume 2 (Seminumerical Algorithms), Second Edition (1981), Addison-Wesley. Table 2, page 660.)
Do the following 16 operations:
\[
\begin{align*}
[A B C D 0 3] \\
[D A B C 8 9] \\
[C D A B 4 11] \\
[B C D A 12 15] \\
[A B C D 2 3] \\
[D A B C 10 9] \\
[C D A B 6 11] \\
[B C D A 14 15] \\
[A B C D 1 3] \\
[D A B C 9 9] \\
[C D A B 5 11] \\
[B C D A 13 15] \\
[A B C D 3 3] \\
[D A B C 11 9] \\
[C D A B 7 11] \\
[B C D A 15 15]
\end{align*}
\]
Then perform the following additions:
\[
\begin{align*}
A &= A + AA \\
B &= B + BB \\
C &= C + CC \\
D &= D + DD
\end{align*}
\]
(That is, each of the four registers is incremented by the value it had before this block was started.)
end /* of loop on i */
Step 5. Output
The message digest produced as output is A,B,C,D. That is, we begin with the low-order byte of A, and end with the high-order byte of D.
This completes the description of MD4. A reference implementation in C is given in the Appendix.
4. Extensions
If more than 128 bits of output are required, then the following procedure is recommended to obtain a 256-bit output. (There is no provision made for obtaining more than 256 bits.)
Two copies of MD4 are run in parallel over the input. The first copy is standard as described above. The second copy is modified as follows.
The initial state of the second copy is:
- word A: 00 11 22 33
- word B: 44 55 66 77
- word C: 88 99 aa bb
- word D: cc dd ee ff
The magic constants in rounds 2 and 3 for the second copy of MD4 are changed from \( \sqrt{2} \) and \( \sqrt{3} \) to \( \sqrt[3]{2} \) and \( \sqrt[3]{3} \):
<table>
<thead>
<tr>
<th>Octal</th>
<th>Hex</th>
</tr>
</thead>
<tbody>
<tr>
<td>Round 2 constant</td>
<td>012050505746 50a28be6</td>
</tr>
<tr>
<td>Round 3 constant</td>
<td>013423350444 5c4dd124</td>
</tr>
</tbody>
</table>
Finally, after every 16-word block is processed (including the last block), the values of the A registers in the two copies are exchanged.
The final message digest is obtained by appending the result of the second copy of MD4 to the end of the result of the first copy of MD4.
5. Summary
The MD4 message digest algorithm is simple to implement, and provides a "fingerprint" or message digest of a message of arbitrary length.
It is conjectured that the difficulty of coming up with two messages having the same message digest is on the order of $2^{64}$ operations, and that the difficulty of coming up with any message having a given message digest is on the order of $2^{128}$ operations. The MD4 algorithm has been carefully scrutinized for weaknesses. It is, however, a relatively new algorithm and further security analysis is of course justified, as is the case with any new proposal of this sort. The level of security provided by MD4 should be sufficient for implementing very high security hybrid digital signature schemes based on MD4 and the RSA public-key cryptosystem.
6. Acknowledgements
I’d like to thank Don Coppersmith, Burt Kaliski, Ralph Merkle, and Noam Nisan for numerous helpful comments and suggestions.
APPENDIX - Reference Implementation
This appendix contains the following files:
```
md4.h -- header file for using MD4 implementation
md4.c -- the source code for MD4 routines
md4driver.c -- a sample "user" routine
session -- sample results of running md4driver
```
/*
** ********************************************************************
** md4.h -- Header file for implementation of
** MD4 Message Digest Algorithm
** Updated: 2/13/90 by Ronald L. Rivest
** (C) 1990 RSA Data Security, Inc.
** ********************************************************************
*/
typedef struct {
unsigned int buffer[4]; /* Holds 4-word result of MD computation */
unsigned char count[8]; /* Number of bits processed so far */
unsigned int done; /* Nonzero means MD computation finished */
} MDstruct, *MDptr;
/* MDbegin(MD)
** Input: MD -- an MDptr
** Initialize the MDstruct preparatory to doing a message digest
** computation.
*/
extern void MDbegin();
/* MDupdate(MD,X,count)
** Input: MD -- an MDptr
** X -- a pointer to an array of unsigned characters.
** count -- the number of bits of X to use (an unsigned int).
** Updates MD using the first "count" bits of X.
** The array pointed to by X is not modified.
** If count is not a multiple of 8, MDupdate uses high bits of
** last byte.
** This is the basic input routine for a user.
** The routine terminates the MD computation when count < 512, so
** every MD computation should end with one call to MDupdate with a
** count less than 512. Zero is OK for a count.
*/
extern void MDupdate();
/* MDprint(MD)
** Input: MD -- an MDptr
** Prints message digest buffer MD as 32 hexadecimal digits.
** Order is from low-order byte of buffer[0] to high-order byte
** of buffer[3].
** Each byte is printed with high-order hexadecimal digit first.
*/
extern void MDprint();
/*
** End of md4.h
***************************************************************************
*/
/* md4.c -- Implementation of MD4 Message Digest Algorithm
** Updated: 2/16/90 by Ronald L. Rivest
** (C) 1990 RSA Data Security, Inc.
***************************************************************************
*/
/* To use MD4:
** -- Include md4.h in your program
** -- Declare an MDstruct MD to hold the state of the digest
** computation.
** -- Initialize MD using MDbegin(&MD)
** -- For each full block (64 bytes) X you wish to process, call
**        MDupdate(&MD,X,512)
**    (512 is the number of bits in a full block.)
** -- For the last block (less than 64 bytes) you wish to process,
**        MDupdate(&MD,X,n)
**    where n is the number of bits in the partial block. A partial
**    block terminates the computation, so every MD computation
**    should terminate by processing a partial block, even if it
**    has n = 0.
** -- The message digest is available in MD.buffer[0] ...
**    MD.buffer[3]. (Least-significant byte of each word
**    should be output first.)
** -- You can print out the digest using MDprint(&MD)
*/
/* Implementation notes:
** This implementation assumes that ints are 32-bit quantities.
** If the machine stores the least-significant byte of an int in the
** least-addressed byte (e.g., VAX and 8086), then LOWBYTEFIRST
** should be set to TRUE. Otherwise (e.g., SUNS), LOWBYTEFIRST
** should be set to FALSE. Note that on machines with LOWBYTEFIRST
** FALSE the routine MDupdate has a side-effect on its input
** array (the order of bytes in each word are reversed). If this is
** undesired a call to MDreverse(X) can reverse the bytes of X back
** into order after each call to MDupdate.
*/
#define TRUE 1
#define FALSE 0
#define LOWBYTEFIRST FALSE
/* Compile-time includes
*/
#include <stdio.h>
#include "md4.h"
/* Compile-time declarations of MD4 "magic constants".
*/
#define I0 0x67452301 /* Initial values for MD buffer */
#define I1 0xefcdab89
#define I2 0x98badcfe
#define I3 0x10325476
#define C2 013240474631 /* round 2 constant = sqrt(2) in octal */
#define C3 015666365641 /* round 3 constant = sqrt(3) in octal */
/* C2 and C3 are from Knuth, The Art of Computer Programming, Volume 2
** Table 2, page 660.
*/
#define fs1 3 /* round 1 shift amounts */
#define fs2 7
#define fs3 11
#define fs4 19
#define gs1 3 /* round 2 shift amounts */
#define gs2 5
#define gs3 9
#define gs4 13
#define hs1 3 /* round 3 shift amounts */
#define hs2 9
#define hs3 11
#define hs4 15
/* Compile-time macro declarations for MD4.
** Note: The "rot" operator uses the variable "tmp".
** It assumes tmp is declared as unsigned int, so that the >>
** operator will shift in zeros rather than extending the sign bit.
*/
#define f(X,Y,Z) ((X&Y) | ((~X)&Z))
#define g(X,Y,Z) ((X&Y) | (X&Z) | (Y&Z))
#define h(X,Y,Z) (X^Y^Z)
#define rot(X,S) (tmp=X,(tmp<<S) | (tmp>>(32-S)))
#define ff(A,B,C,D,i,s) A = rot((A + f(B,C,D) + X[i]),s)
#define gg(A,B,C,D,i,s) A = rot((A + g(B,C,D) + X[i] + C2),s)
#define hh(A,B,C,D,i,s) A = rot((A + h(B,C,D) + X[i] + C3),s)
/* MDprint(MDp)
** Print message digest buffer MDp as 32 hexadecimal digits.
** Order is from low-order byte of buffer[0] to high-order byte of
** buffer[3].
** Each byte is printed with high-order hexadecimal digit first.
** This is a user-callable routine.
*/
void
MDprint(MDp)
MDptr MDp;
{ int i,j;
for (i=0;i<4;i++)
for (j=0;j<32;j=j+8)
printf("%02x",(MDp->buffer[i]>>j) & 0xFF);
}
/* MDbegin(MDp)
** Initialize message digest buffer MDp.
** This is a user-callable routine.
*/
void
MDbegin(MDp)
MDptr MDp;
{
int i;
MDp->buffer[0] = I0;
MDp->buffer[1] = I1;
MDp->buffer[2] = I2;
MDp->buffer[3] = I3;
for (i=0; i<8; i++) MDp->count[i] = 0;
MDp->done = 0;
}
/* MDreverse(X)
** Reverse the byte-ordering of every int in X.
** Assumes X is an array of 16 ints.
** The macro revx reverses the byte-ordering of the next word of X.
*/
#define revx { t = (*X << 16) | (*X >> 16); \
*X++ = ((t & 0xFF00FF00) >> 8) | ((t & 0x00FF00FF) << 8); }
MDreverse(X)
unsigned int *X;
{
register unsigned int t;
revx; revx; revx; revx; revx; revx; revx; revx;
revx; revx; revx; revx; revx; revx; revx; revx;
}
/* MDblock(MDp,X)
** Update message digest buffer MDp->buffer using 16-word data block X.
** Assumes all 16 words of X are full of data.
** Does not update MDp->count.
** This routine is not user-callable.
*/
static void
MDblock(MDp,X)
MDptr MDp;
unsigned int *X;
{
register unsigned int tmp, A, B, C, D;
#if LOWBYTEFIRST == FALSE
MDreverse(X);
#endif
A = MDp->buffer[0];
B = MDp->buffer[1];
C = MDp->buffer[2];
D = MDp->buffer[3];
/* Update the message digest buffer */
ff(A , B , C , D , 0 , fs1); /* Round 1 */
ff(D , A , B , C , 1 , fs2);
ff(C , D , A , B , 2 , fs3);
ff(B , C , D , A , 3 , fs4);
ff(A, B, C, D, 4, fs1);
ff(D, A, B, C, 5, fs2);
ff(C, D, A, B, 6, fs3);
ff(B, C, D, A, 7, fs4);
ff(A, B, C, D, 8, fs1);
ff(D, A, B, C, 9, fs2);
ff(C, D, A, B, 10, fs3);
ff(B, C, D, A, 11, fs4);
ff(A, B, C, D, 12, fs1);
ff(D, A, B, C, 13, fs2);
ff(C, D, A, B, 14, fs3);
ff(B, C, D, A, 15, fs4);
gg(A, B, C, D, 0, gs1); /* Round 2 */
gg(D, A, B, C, 4, gs2);
gg(C, D, A, B, 8, gs3);
gg(B, C, D, A, 12, gs4);
gg(A, B, C, D, 1, gs1);
gg(D, A, B, C, 5, gs2);
gg(C, D, A, B, 9, gs3);
gg(B, C, D, A, 13, gs4);
gg(A, B, C, D, 2, gs1);
gg(D, A, B, C, 6, gs2);
gg(C, D, A, B, 10, gs3);
gg(B, C, D, A, 14, gs4);
gg(A, B, C, D, 3, gs1);
gg(D, A, B, C, 7, gs2);
gg(C, D, A, B, 11, gs3);
gg(B, C, D, A, 15, gs4);
hh(A, B, C, D, 0, hs1); /* Round 3 */
hh(D, A, B, C, 8, hs2);
hh(C, D, A, B, 4, hs3);
hh(B, C, D, A, 12, hs4);
hh(A, B, C, D, 2, hs1);
hh(D, A, B, C, 10, hs2);
hh(C, D, A, B, 6, hs3);
hh(B, C, D, A, 14, hs4);
hh(A, B, C, D, 1, hs1);
hh(D, A, B, C, 9, hs2);
hh(C, D, A, B, 5, hs3);
hh(B, C, D, A, 13, hs4);
hh(A, B, C, D, 3, hs1);
hh(D, A, B, C, 11, hs2);
hh(C, D, A, B, 7, hs3);
hh(B, C, D, A, 15, hs4);
MDp->buffer[0] += A;
MDp->buffer[1] += B;
MDp->buffer[2] += C;
MDp->buffer[3] += D;
}
/* MDupdate(MDp,X,count)
** Input: MDp -- an MDptr
** X -- a pointer to an array of unsigned characters.
** count -- the number of bits of X to use.
** (if not a multiple of 8, uses high bits of last byte.)
** Update MDp using the number of bits of X given by count.
** This is the basic input routine for an MD4 user.
** The routine completes the MD computation when count < 512, so
** every MD computation should end with one call to MDupdate with a
** count less than 512. A call with count 0 will be ignored if the
** MD has already been terminated (done != 0), so an extra call with
** count 0 can be given as a "courtesy close" to force termination
** if desired.
*/
void
MDupdate(MDp,X,count)
MDptr MDp;
unsigned char *X;
unsigned int count;
{
unsigned int i, tmp, bit, byte, mask;
unsigned char XX[64];
unsigned char *p;
/* return with no error if this is a courtesy close with count
** zero and MDp->done is true.
*/
if (count == 0 && MDp->done) return;
/* check to see if MD is already done and report error */
if (MDp->done)
{ printf("\nError: MDupdate MD already done."); return; }
/* Add count to MDp->count */
tmp = count;
p = MDp->count;
while (tmp)
{ tmp += *p;
*p++ = tmp;
tmp = tmp >> 8;
}
/* Process data */
if (count == 512)
{ /* Full block of data to handle */
MDblock(MDp,(unsigned int *)X);
}
else if (count > 512) /* Check for count too large */
{ printf("\nError: MDupdate called with illegal count value %d.",count);
return;
}
else /* partial block -- must be last block so finish up */
{
/* Find out how many bytes and residual bits there are */
byte = count >> 3;
bit = count & 7;
/* Copy X into XX since we need to modify it */
for (i=0; i<=byte; i++) XX[i] = X[i];
for (i=byte+1; i<64; i++) XX[i] = 0;
/* Add padding '1' bit and low-order zeros in last byte */
mask = 1 << (7 - bit);
XX[byte] = (XX[byte] | mask) & ~(mask - 1);
/* If room for bit count, finish up with this block */
if (byte <= 55)
{
for (i=0; i<8; i++) XX[56+i] = MDp->count[i];
MDblock(MDp, (unsigned int *)XX);
}
else /* need to do two blocks to finish up */
{
MDblock(MDp, (unsigned int *)XX);
for (i=0; i<56; i++) XX[i] = 0;
for (i=0; i<8; i++) XX[56+i] = MDp->count[i];
MDblock(MDp, (unsigned int *)XX);
}
/* Set flag saying we're done with MD computation */
MDp->done = 1;
}
}
/*
** End of md4.c
*************************************************************************/
#include <stdio.h>
#include <string.h>
#include "md4.h"
/* MDtimetrial()
** A time trial routine, to measure the speed of MD4.
** Measures speed for 1M blocks = 64M bytes.
*/
MDtimetrial()
{ unsigned int X[16];
MDstruct MD;
int i;
double t;
for (i=0;i<16;i++) X[i] = 0x01234567 + i;
printf("MD4 time trial. Processing 1 million 64-character blocks...\n");
clock();
MDbegin(&MD);
for (i=0;i<1000000;i++) MDupdate(&MD,X,512);
MDupdate(&MD,X,0);
t = (double) clock(); /* in microseconds */
MDprint(&MD); printf(" is digest of 64M byte test input.\n");
printf("Seconds to process test input: %g\n",t/1e6);
printf("Characters processed per second: %ld.\n",(long)(64e12/t));
}
/* MDstring(s)
** Computes the message digest for string s.
** Prints out message digest, a space, the string (in quotes) and a
** carriage return.
*/
MDstring(s)
unsigned char *s;
{ unsigned int i, len = strlen(s);
MDstruct MD;
MDbegin(&MD);
for (i=0;i+64<=len;i=i+64) MDupdate(&MD,s+i,512);
MDupdate(&MD,s+i,(len-i)*8);
MDprint(&MD);
printf(" \"%s\"\n",s);
}
/* MDfile(filename)
** Computes the message digest for a specified file.
** Prints out message digest, a space, the file name, and a
** carriage return.
*/
MDfile(filename)
char *filename;
{ FILE *f = fopen(filename,"rb");
unsigned char X[64];
MDstruct MD;
int b;
if (f == NULL) {
printf("%s can't be opened.\n",filename); return; }
MDbegin(&MD);
while ((b=fread(X,1,64,f))!=0) MDupdate(&MD,X,b*8);
MDupdate(&MD,X,0);
MDprint(&MD);
printf(" %s\n",filename);
fclose(f);
}
/* MDfilter()
** Writes the message digest of the data from stdin onto stdout,
** followed by a carriage return.
*/
MDfilter()
{ unsigned char X[64];
MDstruct MD;
int b;
MDbegin(&MD);
while ((b=fread(X,1,64,stdin))!=0) MDupdate(&MD,X,b*8);
MDupdate(&MD,X,0);
MDprint(&MD);
printf("\n");
}
/* MDtestsuite()
** Run a standard suite of test data.
*/
MDtestsuite()
{ printf("MD4 test suite results:\n");
MDstring("");
MDstring("a");
MDstring("abc");
MDstring("message digest");
MDstring("abcdefghijklmnopqrstuvwxyz");
MDstring("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789");
MDfile("foo"); /* Contents of file foo are "abc" */
}
main(argc,argv)
int argc;
char *argv[];
{ int i;
/* For each command line argument in turn:
** filename -- prints message digest and name of file
** -sstring -- prints message digest and contents of string
** -t -- prints time trial statistics for 64M bytes
** -x -- execute a standard suite of test data
** (no args) -- writes message digest of stdin onto stdout
*/
if (argc==1) MDfilter();
else
for (i=1;i<argc;i++)
if (argv[i][0]=='-' && argv[i][1]=='s') MDstring(argv[i]+2);
else if (strcmp(argv[i],"-t")==0) MDtimetrial();
else if (strcmp(argv[i],"-x")==0) MDtestsuite();
else MDfile(argv[i]);
}
/*
** end of md4driver.c
*************************************************************/
--- Sample session. Compiling and using MD4 on SUN Sparcstation ---
>ls
total 66
-rw-rw-r-- 1 rivest 3 Feb 14 17:40 abcfile
-rwxrwxr-x 1 rivest 24576 Feb 17 12:28 md4
-rw-rw-r-- 1 rivest 9347 Feb 17 00:37 md4.c
-rw-r--r-- 1 rivest 25150 Feb 17 12:25 md4.doc
-rw-rw-r-- 1 rivest 1844 Feb 16 21:21 md4.h
-rw-rw-r-- 1 rivest 3497 Feb 17 12:27 md4driver.c
>
>cc -o md4 -O4 md4.c md4driver.c
md4.c:
md4driver.c:
Linking:
>
>md4 -x
MD4 test suite results:
31d6cfe0d16ae931b73c59d7e0c089c0 ""
be52cb31de3e46245e05fbd66fb24 "a"
a448017aaf21d8525fc10ae87aa6729d "abc"
d9130a8164549fe8188748061ec7014b "message digest"
d79e1c308a5bcbdeea8ed63df412da9 "abcdefghijklmnopqrstuvwxyz"
043f8582f241db351ce627e153e7f0e4 "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
a448017aaf21d8525fc10ae87aa6729d abcfile
>
>md4 -sabc -shi
a448017aaf21d8525fc10ae87aa6729d "abc"
cf222512bd25eb033236f0cd054e308 "hi"
>
>md4 *
a448017aaf21d8525fc10ae87aa6729d abcfile
> cat abcfile | md4
a448017aaf21d8525fc10ae87aa6729d
>
>md4 -t
MD4 time trial. Processing 1 million 64-character blocks...
6325bf77e5891c7c0d8104b64cc6e9ef is digest of 64M byte test input.
Seconds to process test input: 44.0982
Characters processed per second: 1451305.
>
------------------------ end of sample session --------------------
Note: A version of this document including the C source code is available for FTP from THEORY.LCS.MIT.EDU in the file "md4.doc".
Security Considerations
The level of security provided by MD4, as discussed in this memo, is considered to be sufficient for implementing very high security hybrid digital signature schemes based on MD4 and the RSA public-key cryptosystem.
Author’s Address
Ronald L. Rivest
Massachusetts Institute of Technology
Laboratory for Computer Science
NE43-324
545 Technology Square
Cambridge, MA 02139-1986
Phone: (617) 253-5880
EMail: rivest@theory.lcs.mit.edu
Using Model Checkers in an Introductory Course on Operating Systems
Roelof Hamberg
Embedded Systems Institute, Eindhoven, the Netherlands
Roelof.Hamberg@esi.nl
Frits Vaandrager*
Institute for Computing and Information Sciences
Radboud University Nijmegen, Nijmegen, the Netherlands
F.Vaandrager@cs.ru.nl
31st December 2007
Abstract
During the last three years, we have been experimenting with the use of the UPPAAL model checker in an introductory course on operating systems for first-year Computer Science students at the Radboud University Nijmegen. The course uses model checkers as a tool to explain, visualize and solve concurrency problems. Our experience is that students enjoy playing with model checkers because it makes concurrency issues tangible. Even though it is hard to measure objectively, we think that model checkers really help students to obtain a deeper insight into concurrency. In this article, we report on our experiences in the classroom, explain how mutual exclusion algorithms, semaphores and monitors can conveniently be modeled in UPPAAL, and present some results on properties of small, concurrent patterns.
1 Introduction
Each year, thousands of Computer Science students are exposed to introductory courses on operating systems and study one of the numerous textbooks in this area, for instance Tanenbaum & Woodhull [22], Stallings [20], Nutt [17], or Silberschatz & Galvin [19]. All these textbooks contain one or more chapters on principles of concurrency, with a discussion of fundamental concepts such as mutual exclusion algorithms, semaphores, monitors, message passing, deadlock and starvation.
*Supported by NWO/EW project 612.000.103 Fault-tolerant Real-time Algorithms Analyzed Incrementally (FRAAI).
For beginning students concurrency is a difficult subject. To begin with, it is hard to visualize dynamic concurrent behavior in a static book. As a reader one often needs four hands, like the Hindu god Vishnu, to simultaneously point at the different control locations of a concurrent program, as well as at the explanatory text. Usually, no correctness proofs are given in textbooks on operating systems. Authors do not want to bother their readers, i.e., the students, with tedious formal proofs, since this would distract attention from the key issues they want to get across. But contrary to their intuitive intention, this does not make life easy for students. Students know concurrency is tricky, that deadlocks, race conditions and starvation scenarios are hard to avoid, and that program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence [7]. But how then should they convince themselves of the correctness of concurrent algorithms and programs?
Also for instructors, grading assignments on concurrency poses major challenges. Students often come with “creative” solutions to concurrency problems in which, for instance, numerous semaphores are used in intricate ways. How to determine whether such solutions are correct? Many instructors will admit that frequently they give a student the maximal score, simply because they have not been able to spot any mistake. But this does not mean these solutions are correct!
Many experts agree that concurrency is the next major revolution in how we write software [21]. Applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains that have now started becoming available and will continue to materialize over the next several years. For example, Intel is talking about someday producing 100-core chips; a single-threaded application can exploit at most $\frac{1}{100}$ of such a chip’s potential throughput. This implies that concurrency should be a major topic in any course on operating systems. Race conditions, deadlock and starvation are not just things studied in a distant past by operating system pioneers such as Dijkstra: our students need a thorough understanding of these issues in order to be able to build the applications of tomorrow.
Model checking is emerging as a practical engineering tool for automated debugging of complex reactive systems such as embedded controllers and network protocols [4, 11, 2]. In model checking, required or hypothesized properties of the system are expressed as (temporal) logic formulas, and efficient symbolic algorithms are used to traverse the model defined by the system and check if the specified property holds or not. Extremely large state-spaces can often be traversed in minutes. We think that after 20 years of research on model checking this technology has become sufficiently mature and it is time to change the way in which we teach principles of concurrency:
1. Using the input language of model checkers it is straightforward to express concurrency algorithms in terms of networks of communicating
state machines. Algorithms are usually explained using pseudo code and/or text. However, for understanding algorithms it greatly helps to see how pseudo code and text correspond to precise automaton models and assertions about these models. By specifying state transitions, we make explicit which operations are atomic and which operations are not, a key issue in concurrent programming.
2. Using the (graphical) simulators provided by some modern model checkers it becomes easy to visualize the dynamics of concurrent algorithms, in particular traces of the evolving system in which mutual exclusion is violated, starvation occurs, etcetera.
3. Students may convince themselves of the correctness of algorithms without having to spend time on tedious, manual correctness proofs, which are of independent interest but belong in a different course: here the verification is done fully automatically by the model checker.
During the last three years, we have been experimenting with the use of the UPPAAL model checker in an introductory course on operating systems for first-year Computer Science students at the Radboud University Nijmegen. We decided not to tell our students about the wonderful theory and algorithms behind model checking, but to focus on how a model checker can be used to explain, visualize and solve concurrency problems. We told the students to view a model checker just like a pocket calculator: as a tool that does the math for you.
UPPAAL [2, 1] is an integrated tool environment for specification, validation and verification of systems modeled as networks of timed automata. It is available for free for non-profit applications at www.uppaal.com. The language for the new version UPPAAL 4.0 features a subset of the C programming language, a graphical user interface for specifying networks of extended finite state machines (EFSMs), and syntax for specifying timing constraints. We selected the tool because of its nice graphical user interface, which makes it very easy to use. In fact, after less than one hour of training students are able to make simple assignments.
Our experience is that students very much enjoy playing with model checkers because it makes concurrency issues tangible. Even though it is hard to measure objectively, we think that model checkers really help students to obtain a deeper insight into concurrency. Last year, for instance, students participating in our course discovered several deep mistakes in a published textbook [8], simply by modeling and analyzing proposed solutions from the book using UPPAAL.
In this article, we report on our experiences in the classroom, and explain how a variety of concurrency-related concepts can be conveniently modeled in UPPAAL. Section 2 discusses models of some basic mutual exclusion algorithms, Section 3 is devoted to models of semaphores and concurrency problems that use semaphores, and Section 4 presents models involving monitors. Finally, in Section 5, we present some conclusions and discuss related work. All the models discussed in this article are available at the URL
2 Mutual Exclusion
Software solutions for the mutual exclusion problem are rarely used in practice, since at the hardware level mutual exclusion can be realized using test-and-set or equivalent instructions. Nevertheless, most textbooks present various concurrent programming solutions for mutual exclusion that have been proposed in the literature, since this provides an excellent way to introduce students to some fundamental issues in concurrency. In our course, we have been using UPPAAL to visualize and analyze the behavior of a number of mutual exclusion algorithms. As an illustration we discuss here two models of Peterson's algorithm.
In its original formulation, Peterson’s algorithm [18] is stated for two
processes P(0) and P(1) that work in parallel on a single resource. In
pseudo code, the algorithm for process P(pid) reads as follows:
while (true)
{
    flag[pid] = true
    turn = 1-pid
    while ( flag[1-pid] && turn == 1-pid )
        ; // do nothing
    // critical section
    // end of critical section
    flag[pid] = false
}
The algorithm uses three variables, flag[0], flag[1] and turn. A flag
value of 1 indicates that the process wants to enter the critical section. The
variable turn holds the pid of the process whose turn it is.
Figure 1 shows a UPPAAL model of process P(pid). As one can see, the
translation between pseudo code and UPPAAL is straightforward. Basically,
there is a location in the automaton for each line of code. However, a fundamental aspect of the algorithm that is explicit in the UPPAAL model but left
implicit in the pseudo code, is that evaluation of the condition flag[1-pid]
&& turn == 1-pid is not atomic. It may happen, for instance, that first
process P(0) reads variable flag[1], subsequently process P(1) takes a
number of steps, and only after that process P(0) reads variable turn. The
model therefore contains two locations to capture the evaluation of the condition: in location **try2** process P(pid) reads variable \( \text{flag}[1{-}\text{pid}] \) and in location **try3** it reads variable \( \text{turn} \).
Figure 2 shows a screen dump of a simulation of Peterson's algorithm in **UPPAAL**. Red dots indicate the current control location of each process. During simulation a user may manually select possible transitions, or perform a random simulation. A useful feature of **UPPAAL** is that counterexamples that have been found by the verifier can be replayed within the simulator. Using **UPPAAL**, it is trivial to verify that Peterson's algorithm indeed satisfies **mutual exclusion**, that is, for all reachable states (A[] in temporal logic notation) it holds that \( P(0) \) and \( P(1) \) cannot be in their critical section at the same time:
\[
A[] \ \text{not}( P(0).cs \text{ and } P(1).cs )
\]
**UPPAAL** also immediately finds a counterexample to the claim made in Wikipedia about the algorithm\(^1\) that “If \( P0 \) is in its critical section, then \( \text{flag}[0] \) is 1 and either \( \text{flag}[1] \) is false or \( \text{turn} \) is 0”. If we ask **UPPAAL** to check the corresponding property
\[
A[]\ P(0).cs\ \text{imply}\ (\text{flag}[0]==1\ \&\&\ (\text{flag}[1]==0\ \text{or}\ \text{turn}==0))
\]
it produces the obvious counterexample — which can be replayed in the simulator — in which first P(0) enters the critical section and then P(1) performs its first assignment.
An important property of Peterson's algorithm is bounded waiting: a process will not wait longer than approximately one turn for entrance to the critical section. In order to state and prove this property in UPPAAL, we add timing constraints to the model: an upper bound l on the execution time of instructions, and an upper bound c on the critical section time. Figure 3 shows the enriched model. The idea is that each process has a local clock x, which is reset before entering a location. The invariant x <= l on location try0 ensures that the time spent in this location is at most l. This models the upper bound l for performing the instruction flag[pid]=true. In a similar way we have added time bounds to the rest of the model. For arbitrary integer values of the parameters l and c, UPPAAL can establish that the time from when a particular process enters try0 until it enters cs is at most c+10*l. This is done by introducing a local clock y for each process, which is reset whenever the process enters location try0. UPPAAL can then check that:
\[ A[]\ (P(0).try0\ ||\ P(0).try1\ ||\ P(0).try2\ ||\ P(0).try3)\ \text{imply}\ P(0).y <= c+10*l \]
This property says that a process stays at most $c+10\times l$ time units in the trying region. Since a process can only leave the trying region by entering the critical section, this implies that a process must enter the critical section after at most $c+10\times l$ time units. If we change the bound to $c+10\times l-1$ then the property no longer holds, and UPPAAL produces a counterexample. This result is consistent with Theorem 10.14 from Lynch [15], which establishes an upper bound of $c+O(l)$. Our result is stronger in the sense that we give a precise upper bound on the number of instructions. The result in Lynch [15] is stronger in the sense that it holds for all values of $c$ and $l$, whereas we have only checked a couple of instances.
UPPAAL is not able to prove general liveness properties for the untimed model of Figure 1, such as the temporal logic formula $P(0).\text{try0} \leadsto P(0).\text{cs}$ (whenever process $P(0)$ enters the trying region, it will eventually enter the critical section). Such properties can easily be checked using other model checkers such as SPIN [11] and SMV [4, 3]. These tools however miss the graphical user interface of UPPAAL and are not so easy to use for people without any background in formal methods. It would be useful (and not too difficult) to establish a link between UPPAAL and SPIN or SMV that would allow one to model check temporal logic formulas for untimed UPPAAL models.
One should not expect first-year students to come up independently with UPPAAL models such as those in Figures 1 and 3. However, these models
are very helpful to explain the operation of the algorithm through the Uppaal simulator. Students find it easy to understand the models and to modify them in order to answer various questions about the algorithm such as “Is Peterson’s algorithm still correct if we change the evaluation order of the conditions flag[1-pid] and turn == 1-pid?” Also, once students understand the Uppaal model of one mutual exclusion algorithm, they are able to also model other mutual exclusion algorithms. For instance, in less than half an hour most students manage to construct a model of Hyman’s algorithm [12] and to discover using Uppaal why this algorithm is flawed.
3 Semaphores
Semaphores [5] constitute a classic method for restricting access to shared resources. They are widely used in practice and are the primitive synchronization mechanism in many operating systems. Solutions that use semaphores are portable and usually efficient. Even though they have been criticized for being too unstructured and difficult to use, all major textbooks on operating systems discuss semaphores and their use in solving classic problems of synchronization. In this section, we explain how we modeled semaphores in Uppaal, and how the model checker can be used to analyze solutions to synchronization problems.
In Uppaal, transitions between states may be labeled by output actions or input actions. A transition with an output action a! from one automaton may synchronize (occur simultaneously) with a transition with a matching input action a? from a different automaton. A semaphore s is modeled as an automaton that interacts with its environment via three types of synchronization actions: semWait[s][p]?, semSignal[s][p]? and semGo[s][p]!, where p is a process identifier. Figure 4 gives a schematic representation. A semaphore maintains an integer variable count to record the number of shared resources that is still available, and a list queue with names of processes that are waiting. Whenever a process p wants to access a resource protected by s, it performs a synchronization action semWait[s][p]! that synchronizes with a matching action semWait[s][p]? of s. If count is positive, then s will immediately react with a synchronization action semGo[s][p]!, and count is decremented. Upon performing the matching action semGo[s][p]!, process p may access the resource. If count is less than or equal to 0, then process identifier p is stored in queue and count is decremented. A process p releases a resource protected by s via a synchronization semSignal[s][p]!. After a matching semSignal[s][p]? transition, the semaphore increments count. If count was negative before this transition then, in addition, the first process identifier q is removed from queue and activated via an action semGo[s][q]!. We assume that processes are activated in FIFO order.
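The bookkeeping described above can be paraphrased in a few lines of Python. The class below is a sketch of the semaphore's semantics only; the names `sem_wait`/`sem_signal` and the `go` list are our own, mirroring semWait, semSignal and the semGo activation events of the model.

```python
from collections import deque

class FifoSemaphore:
    """Semantics sketch: count of available resources plus a FIFO wait queue."""
    def __init__(self, init_val):
        self.count = init_val       # resources still available
        self.queue = deque()        # FIFO of blocked process ids
        self.go = []                # processes activated so far (semGo events)

    def sem_wait(self, p):
        if self.count > 0:
            self.go.append(p)       # immediate semGo[s][p]!
        else:
            self.queue.append(p)    # block p in FIFO order
        self.count -= 1             # decremented in both cases

    def sem_signal(self, p):
        self.count += 1
        if self.queue:              # count was negative: wake first waiter
            q = self.queue.popleft()
            self.go.append(q)       # semGo[s][q]!

s = FifoSemaphore(1)
s.sem_wait('A')   # A gets the resource at once
s.sem_wait('B')   # B blocks
s.sem_wait('C')   # C blocks behind B
s.sem_signal('A') # wakes B first (FIFO)
print(s.go)       # ['A', 'B']
```

As in the UPPAAL template, a negative `count` records how many processes are currently blocked, and signalling activates waiters strictly in FIFO order.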
Figure 5 displays our UPPAAL model of a semaphore. Students do not need to understand the details of the code; they just use the template as a black box when solving synchronization problems. The semaphore template has three parameters: (1) id, the unique identifier of the semaphore, (2) init.val, the initial value of the semaphore, and (3) queue.size, the maximal number of processes in the waiting queue. Since UPPAAL does not support dynamically growing data structures, we need to fix an upper bound on the size of the queue. In our model, the queue is implemented as an array of length queue.size. If, due to a semWait, a new element needs to be added to a queue that is full, the automaton jumps to a special overflow location. UPPAAL is then used to verify that overflow cannot be reached. Since currently in UPPAAL it is not possible to initialize a parametrized array, we need a special transition to do this.\(^2\) By making the initial location “committed” we ensure that the initialization takes place before any other activity in the system. In several transitions we use a select field p : PID. This indicates that we have instances of these transitions for each p in the set PID, that is, for each process identifier.
Note that in our modeling approach the usual semWait(s) operation from the textbooks translates into two consecutive transitions labeled with synchronization actions semWait[s][p]! and semGo[s][p]?, respectively. Each semSignal(s) operation by p is encoded by a transition with synchronization action semSignal[s][p]!.
A UPPAAL model for the binary semaphore is obtained as a small and obvious variation of the general semaphore model.
\(^2\)This imperfection of UPPAAL actually has a positive consequence: due to the extra transition the automaton has a striking resemblance with a bee!
3.1 Producer/Consumer Problem
Now that we have models of semaphores, we can start playing with them! Figure 6 shows a model of the incorrect solution to the infinite-buffer producer/consumer problem using binary semaphores, as discussed by Stallings [20] on pages 221-224. The model is obtained in a straightforward manner from the code presented in [20][Fig. 5.9]. As Stallings [20] points out, the problem with this solution is that variable n may become negative, that is, the consumer may consume an item from the buffer that does not exist. By checking the query E<> n<0 with the verifier (“there exists a path to a state in which n<0”), UPPAAL produces a counterexample almost instantaneously. Essentially (modulo permutation of independent transitions), this is the same counterexample as the 21-step example presented by Stallings
in Table 5.3. The ability of UPPAAL to replay counterexamples in the simulator greatly helps in understanding what goes wrong. Note that the model checker is not able to explore the full (infinite) state space of this model.
We also modeled the solution to the bounded-buffer producer/consumer problem with semaphores presented in [20][Figure 5.13]. This model is shown in Figure 7. Stallings [20] claims correctness of this solution, but does not prove it. Even for large values of sizeofbuffer up to 10,000, UPPAAL can prove mutual exclusion and absence of deadlock automatically within a few seconds. After introducing an auxiliary variable buffer that is incremented by function produce() and decremented by function consume(), UPPAAL can prove that always buffer>=0 and buffer<=sizeofbuffer, i.e., the consumer never consumes an item that does not exist, and there is no buffer overflow.
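The invariant that UPPAAL proves for this model can also be exercised in a thread-based sketch. The code below is our own Python analogue of the bounded-buffer solution, with semaphores s (mutual exclusion), n (filled slots) and e (empty slots); it is an illustration of the scheme, not the model from [20].

```python
import threading

SIZE, ITEMS = 4, 100
s = threading.Semaphore(1)      # mutual exclusion on the buffer
n = threading.Semaphore(0)      # number of filled slots
e = threading.Semaphore(SIZE)   # number of empty slots
buffer = 0

def producer():
    global buffer
    for _ in range(ITEMS):
        e.acquire()             # wait for an empty slot first...
        s.acquire()             # ...then lock the buffer (this order matters!)
        buffer += 1
        assert 0 <= buffer <= SIZE   # the invariant UPPAAL proves
        s.release()
        n.release()

def consumer():
    global buffer
    for _ in range(ITEMS):
        n.acquire()
        s.acquire()
        buffer -= 1
        assert 0 <= buffer <= SIZE
        s.release()
        e.release()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(buffer)   # 0
```

Reversing the order of the two `acquire` calls in the producer can leave it holding the mutex while waiting for an empty slot, which is exactly the deadlocking variant of Tanenbaum and Woodhull shown in Figure 8.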
The above solution to the bounded-buffer producer/consumer problem is also presented by Tanenbaum and Woodhull [22]. The authors observe that when the order of the wait operations in the Producer code is reversed there is a deadlock. This observation can easily be verified using UPPAAL; the model for the producer is shown in Figure 8.
Figure 7: Models of consumer and producer for the correct solution to the bounded-buffer case.
Figure 8: The UPPAAL model of the producer for the faulty solution to the bounded-buffer case as noted by Tanenbaum and Woodhull in [22].
3.2 Jurassic Park
UPPAAL is an excellent tool for checking correctness of solutions to concurrency problems. As an illustration, consider assignment 5.11 from [20]:
The following problem was once used on an exam:
Jurassic Park consists of a dinosaur museum and a park for safari riding. There are \( m \) passengers and \( n \) single-passenger cars. Passengers wander around the museum for a while, then line up to take a ride in a safari car. When a car is available, it loads the one passenger it can hold and rides around the park for a random amount of time. If the \( n \) cars are all out riding passengers around, then a passenger who wants to ride waits; if a car is ready to load but there are no waiting passengers, then the car waits. Use semaphores to synchronize the \( m \) passenger processes and the \( n \) car processes.
The following skeleton code was found on a scrap of paper on the floor of the exam room. Grade it for correctness. Ignore syntax and missing variable declarations. Remember that P and V correspond to semWait and semSignal.
```cpp
resource Jurassic_Park()
sem car_avail:=0, car_taken:=0, car_filled:=0,
passenger_released:=0
process passenger(i:=1 to num_passengers)
do true -> nap(int(1000*wander_time))
P(car_avail); V(car_taken); P(car_filled)
P(passenger_released)
od
end passenger
process car(j:=1 to num_cars)
do true -> V(car_avail); P(car_taken); V(car_filled)
nap(int(1000*ride_time))
V(passenger_released)
od
end car
end Jurassic_Park
```
Within 15 minutes we translated the above code into UPPAAL (see Figure 9). With the help of UPPAAL it is easy to see that the solution is flawed. For a model with two passengers and two cars, for instance, we established that it may occur that both cars are in the park but one of the passengers is not in his car. In fact, and this is of course the horror scenario, a passenger may be released from the car while he is still in the park.
3.3 Dining Philosophers
The first somewhat more involved example of a synchronization problem that is given in nearly all textbooks, is the problem of the dining philosophers. Dijkstra proposed this problem first [6] as an examination question about a synchronization problem and it surely has become a classic. Five philosophers think and eat in alternation, but in order to eat they need two forks, each of which is shared with a neighboring philosopher at a round table.
Without any precautions, deadlock arises by the sequence of events in which all philosophers pick up their left fork first. After that, they have to wait infinitely long for the other fork: deadlock! Figure 10 shows a UPPAAL model of the naive solution to the dining philosophers problem. The model checker finds the deadlock immediately.
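The deadlock can also be found by a small hand-rolled reachability search. The sketch below is our own encoding of the naive left-fork-first protocol: it explores all interleavings and returns a reachable state with no outgoing moves, i.e., the deadlock in which every philosopher holds exactly one fork.

```python
from collections import deque

# Philosopher state: 0 = thinking, 1 = holds left fork, 2 = eating.
# forks[f] is True when fork f is taken.
N = 5

def moves(state):
    phil, forks = state
    for i in range(N):
        left, right = i, (i + 1) % N
        if phil[i] == 0 and not forks[left]:
            yield i, 1, (left,)          # pick up left fork
        elif phil[i] == 1 and not forks[right]:
            yield i, 2, (right,)         # pick up right fork, start eating
        elif phil[i] == 2:
            yield i, 0, (left, right)    # put down both forks

def apply_move(state, move):
    phil, forks = list(state[0]), list(state[1])
    i, new, toggles = move
    phil[i] = new
    for f in toggles:
        forks[f] = not forks[f]
    return (tuple(phil), tuple(forks))

def find_deadlock():
    init = ((0,) * N, (False,) * N)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        succ = [apply_move(s, m) for m in moves(s)]
        if not succ:
            return s                     # no enabled move: deadlock
        for t in succ:
            if t not in seen:
                seen.add(t); frontier.append(t)
    return None

dead = find_deadlock()
print(dead)   # ((1, 1, 1, 1, 1), (True, True, True, True, True))
```

The state found is exactly the scenario described above: all five philosophers have picked up their left fork and wait forever for the right one.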
To overcome deadlock, one common approach is to assume the presence of a doorman who only allows four philosophers at a time into the dining room. A model along these lines is shown in Figure 11. In order to prove that each philosopher who wants to eat eventually can do so, we impose an upper bound \( U \) on the time allowed for eating, using a local clock variable \( x \) for each philosopher. We assume that the time needed for the semaphore operations
can be ignored\(^3\). To exclude Zeno cycles\(^4\), we also impose a lower bound on the time needed to eat. With these assumptions, absence of deadlock and the \textit{leadsto} property $\text{Philosopher}_0.\text{try}_0 \leadsto \text{Philosopher}_0.\text{eat}$ are easily shown to hold. In fact we can establish an upper bound of $5*U$ on the waiting time for a philosopher: the property
\[ A[] \text{Philosopher}_0.\text{try}_4 \Rightarrow \text{Philosopher}_0.x \leq B \]
holds for $B=5*U$ but not for $B=5*U-1$. Since clock $x$ is reset upon entering location $\text{try}_0$, this means that a philosopher may have to wait in $\text{try}_0$ for at most $5*U$ time units before being allowed to enter location $\text{eat}$. UPPAAL proves the upper bound $5*U$ almost instantaneously, and only needs about 2 seconds for the 62-step counterexample for $5*U-1$. Proving the upper bound by hand is hard and way too difficult for the large majority of Computer Science students.
Adding clock variables and timing constraints to the model requires some effort, and advocates of temporal logic may argue that it is much simpler to establish liveness properties with a tool that supports general temporal logic model checking. However, if you are a philosopher knowing that you will be allowed to eat “eventually” is only of theoretical interest! Knowledge of the time bound $5*U$ is useful in practice.
With the above model as a starting point, students may explore further properties. Does it make any difference if we add nondeterminism to
\(^3\)A symbol $U$ in a location indicates that the location is “urgent” and no time may pass if the automaton is in this location.
\(^4\)Infinite sequences of transitions in which time does not advance beyond a certain point.
the model and philosophers may pick up forks in any order? What is the maximal number of philosophers that can eat at any point in time? What happens if we change the number of philosophers? What happens if we no longer ignore the time needed to pick up forks? Etc. Etc.
3.4 The Room Party Problem
A particularly difficult synchronization problem is the “room party problem”, which has been defined by Allen B. Downey in his “Little Book of Semaphores” (cf. [8, 9]). The situational sketch is as follows:
A dean of students should keep order in the students’ house. In order to do this, he can enter a room with too many students (in order to break up a too large party) or he can enter an empty room to conduct a search. Otherwise, the dean may not enter a room. If the dean is in a room, no additional students may enter, but students may leave. In that case, the dean has to stay until all students have left. There is only one dean, and no limitation on the number of students in one room. The challenge is to construct code for dean and students such that these constraints are satisfied.
The first solution of Downey, published in [8], is captured in Table 1. It employs a mutex to protect the variables students and dean, which denote the number of students in a room and the state of the dean, respectively. The other two semaphores, clear and lieIn, are used as rendezvous points between a student and the dean.
<table>
<thead>
<tr>
<th>dean code:</th>
<th>student code:</th>
</tr>
</thead>
<tbody>
<tr>
<td>mutex.wait()</td>
<td>mutex.wait()</td>
</tr>
<tr>
<td>if students > 0 and students < 50:</td>
<td>students += 1</td>
</tr>
<tr>
<td>dean = 'waiting'</td>
<td>if students == 50 and dean == 'waiting':</td>
</tr>
<tr>
<td>mutex.signal()</td>
<td>lieIn.signal() # and pass mutex</td>
</tr>
<tr>
<td>lieIn.wait() # and get mutex</td>
<td>else:</td>
</tr>
<tr>
<td># students must be 0 or >= 50</td>
<td>mutex.signal()</td>
</tr>
<tr>
<td>if students >= 50:</td>
<td>party()</td>
</tr>
<tr>
<td>dean = 'in room'</td>
<td>mutex.wait()</td>
</tr>
<tr>
<td>breakup()</td>
<td>students -= 1</td>
</tr>
<tr>
<td>mutex.signal()</td>
<td>if students == 0 and dean == 'waiting':</td>
</tr>
<tr>
<td>clear.wait() # and get mutex</td>
<td>lieIn.signal() # and pass mutex</td>
</tr>
<tr>
<td>else: # students = 0</td>
<td>elif students == 0 and dean == 'in room':</td>
</tr>
<tr>
<td>search()</td>
<td>clear.signal() # and pass mutex</td>
</tr>
<tr>
<td>dean = 'not here'</td>
<td>else:</td>
</tr>
<tr>
<td>mutex.signal()</td>
<td>mutex.signal()</td>
</tr>
</tbody>
</table>
Table 1: The first solution of Downey to the room party problem.
It turns out that this solution does not prevent students from entering the room when the dean is there to break up a party. The models for the above descriptions are given in Figure 12. Analysis reveals a trace of some 20 steps that shows how the dean has to release the `mutex` after breaking up the party without being able to prevent students from entering the room.
This problem was actually found by student Marc Schoolderman during his assignment work as a part of his course on operating systems. He also proposed an alternative solution and showed, with the aid of UPPAAL, that it obeys the required properties. A discussion with the author, however, resulted in yet another proposal from Downey's side, published in [9]. A turnstile `turnstile` is added to the code in Table 2, specifically designed to keep students from entering while the dean is in the room.
But, alas, also this model does not satisfy the required property mentioned above. A trace (of 64 steps in this case) shows that one student may have received and released the turnstile to enter, but still is waiting for the
mutex, which he gets from the dean while the latter is still in the room. Such counterexamples are hard to find just by looking at the code. The Uppaal model with which this analysis was done, is shown in Figure 13. Note that the structure of the code of Downey is very well visible in the Uppaal model.
Students participating in our course discovered several other mistakes in [8], simply by modeling and analyzing proposed solutions from the book using Uppaal. The author uses semaphores in a very structured manner, building on solutions for basic synchronization patterns, and we do not think that these problems could easily have been avoided using different synchronization primitives. Our conclusion is that the intrinsic complexity of these synchronization problems requires the use of formal methods tools such as model checkers to ensure correctness of solutions.
### 4 Monitors
The monitor was introduced in the 70s by Hoare [10] as an alternative programming-language construct that provides equivalent functionality to that of semaphores and that is easier to control. A number of programming languages, such as Concurrent Pascal and Java, have implemented monitors. The basic version of Hoare was refined by Lampson and Redell [14] in the 80s. A monitor is a software module that consists of a number of proce-
<table>
<thead>
<tr>
<th>Dean code:</th>
<th>Student code:</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>mutex.wait()</code></td>
<td><code>mutex.wait()</code></td>
</tr>
<tr>
<td>if students > 0 and students < 50:</td>
<td>if dean == 'in room':</td>
</tr>
<tr>
<td>dean = 'waiting'</td>
<td>mutex.signal()</td>
</tr>
<tr>
<td>mutex.signal()</td>
<td>turn.wait()</td>
</tr>
<tr>
<td>lieIn.wait() # and get mutex</td>
<td>turn.signal()</td>
</tr>
<tr>
<td># students must be 0 or >= 50</td>
<td>mutex.wait()</td>
</tr>
<tr>
<td>if students >= 50:</td>
<td>students += 1</td>
</tr>
<tr>
<td>dean = 'in room'</td>
<td>if students == 50 and dean == 'waiting':</td>
</tr>
<tr>
<td>breakup()</td>
<td>lieIn.signal() # and pass mutex</td>
</tr>
<tr>
<td>turn.wait() # lock turnstile</td>
<td>else:</td>
</tr>
<tr>
<td>mutex.signal()</td>
<td>mutex.signal()</td>
</tr>
<tr>
<td>clear.wait() # and get mutex</td>
<td>party()</td>
</tr>
<tr>
<td>turn.signal() # unlock turnstile</td>
<td>mutex.wait()</td>
</tr>
<tr>
<td>else:</td>
<td>students -= 1</td>
</tr>
<tr>
<td># students = 0</td>
<td>if students == 0 and dean == 'waiting':</td>
</tr>
<tr>
<td>search()</td>
<td>lieIn.signal() # and pass mutex</td>
</tr>
<tr>
<td>dean = 'not here'</td>
<td>else:</td>
</tr>
<tr>
<td>mutex.signal()</td>
<td>clear.signal() # and pass mutex</td>
</tr>
<tr>
<td></td>
<td>else:</td>
</tr>
<tr>
<td></td>
<td>mutex.signal()</td>
</tr>
</tbody>
</table>
Table 2: The second solution of Downey to the room party problem.
dures, some initialization and local data. Processes can enter the monitor by invoking one of the procedures, while only one process may be executing in the monitor at any time. Other processes that have invoked the monitor are blocked until the monitor becomes available. Each procedure has the following structure:
```plaintext
return_structure procedure(invoke_variables)
{
if condition(this_procedure) then wait(my_condition);
execute procedure;
update conditional variables;
notify appropriate conditions;
}
```
Lampson and Redell refined this model by replacing the if statement by a while statement and the notify by a broadcast. This renders the
Figure 13: UPPAAL model of Downey’s room party problem, version 2.
monitor much more robust against missing events and makes the procedures much more independent of each other, because they don’t have to know which conditions to trigger precisely. It comes at the cost of more iterations, but their number is manageable (cf. [14]).
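In a language with built-in condition synchronization, the Lampson/Redell discipline amounts to re-checking the condition in a while loop after waking up, and broadcasting to all waiters instead of notifying one. The following Python sketch (a toy bounded counter, our own example) illustrates both points.

```python
import threading

class BoundedCounter:
    """Monitor-style shared counter kept within [0, limit]."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self.cond = threading.Condition()

    def increment(self):
        with self.cond:
            while self.value >= self.limit:   # while, not if: the condition
                self.cond.wait()              # may be false again on wake-up
            self.value += 1
            self.cond.notify_all()            # broadcast to all waiters

    def decrement(self):
        with self.cond:
            while self.value <= 0:
                self.cond.wait()
            self.value -= 1
            self.cond.notify_all()

c = BoundedCounter(2)

def worker():
    for _ in range(50):
        c.increment()
        c.decrement()

ts = [threading.Thread(target=worker) for _ in range(4)]
for t in ts: t.start()
for t in ts: t.join()
print(c.value)   # 0
```

Because every state change is followed by a broadcast and every waiter re-tests its own condition, no procedure needs to know which specific waiter to wake, at the cost of some spurious wake-ups.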
We have modeled the monitor as shown in Figure 14. The conditions on the procedures and their updates appear in the UPPAAL template as the two functions condEval and condUpdate, which are both model-dependent. There are two possible transitions from the central "standby" state, one being the reception of monitor invocations, which puts the calling process at the end of the queue to be handled, the other being the handling of the processes themselves, which is enabled by condEval(). If this guard yields true, the transition is made urgently, because urg! denotes an urgent broadcast channel (to which no-one listens). The first process in the queue that is enabled is taken off the queue and executed, and its corresponding conditional variables are updated through condUpdate. The last notification statement of the Lampson and Redell monitor is taken into account by the condEval() evaluation each time the central state is entered.
Figure 14: Model of a monitor.
4.1 Dining Philosophers Revisited
Several textbooks present a solution to the classical dining philosophers problem with a monitor. Stallings does this in [20], and so do Nutt in Operating Systems: A Modern Perspective [17] and Silberschatz and Galvin in Operating System Concepts [19]. In the last two books, the presented solution involves a test procedure that is not side-effect free, which is objectionable programming style in itself (cf. Figure 9.11 on page 230 of [17]). Moreover,
both mention that the solution is deadlock free, but not starvation free, and leave the solution of the latter problem as an exercise for the reader.
Figure 15: Template of a philosopher for the UPPAAL model of Nutt’s solution with a monitor to the dining philosopher problem.
meta int eaters = 0;
const int thinking = 0, eating = 1;
int status[N];
bool test(int p) {
return ( (status[(p+N-1)%N] != eating) && (status[(p+N+1)%N] != eating) );
}
bool condEval(int p, int t) {
if ( t==pickUpForks ) { return test(p); }
if ( t==putDownForks ) { return true; }
return false; // default case; never reached for a valid t
}
void condUpdate(int p, int t) {
if ( t==pickUpForks ) { status[p] = eating; eaters++; }
if ( t==putDownForks ) { eaters--; status[p] = thinking; }
}
Table 3: Model dependent code in UPPAAL model of dining philosopher.
In Figure 15 the philosopher part of the model is shown. Table 3 shows the model dependent code in the monitor template. As one can see, this is close to Nutt’s solution, but the condition test has been made side-effect free, while the notifications are automatic by the return to the central state in the monitor template. The query
Philosopher(0).THINK-->Philosopher(0).EATING
does not hold, and UPPAAL immediately produces a trace exhibiting the starvation problem.
A possible solution to the starvation problem involves the introduction of a doorman, as explained for instance by Downey [8] in terms of semaphores. The model is easily extended as shown in Figure 16 and the extension of the code in Table 4. The liveness property (absence of philosopher starvation) is readily checked with UPPAAL.
Figure 16: Template of a philosopher with the introduction of a doorman in the solution.
```java
bool condEval(int p, int t) {
if ( t==pickUpForks ) { return ( test(p) ); }
if ( t==putDownForks ) { return true; }
if ( t==doorman ) { return ( 2*eaters < PHIL ); }
return false; // default case; never reached for a valid t
}
void condUpdate(int p, int t) {
if ( t==pickUpForks ) { status[p] = eating; }
if ( t==putDownForks ) { eaters--; status[p] = thinking; }
if ( t==doorman ) { eaters++; }
}
```
Table 4: Model dependent code in UPPAAL model for dining philosopher with doorman.
5 Conclusions and Related Work
Lamport [13] asks: “Programs are not released without being tested; why should algorithms be published without being model checked?” Similarly, we conclude “Why should algorithms be explained without the use of a model checker?” As discussed in this article, key advantages of using model checkers are: (a) unambiguous definition of algorithms and their properties, (b) visualization of concurrent behavior, and (c) fully automatic proof of
correctness properties. Model checking technology has become easy to use and sufficiently powerful to handle nontrivial instances of all the concurrent algorithms that are typically discussed in introductory courses on operating systems or concurrent programming. The behavior of these algorithms is tricky, and authors, instructors and students should simply not trust solutions that have not been model checked. However, we emphasize that key elements for successful use of a model checker with first-year students are (a) the availability of a powerful graphical user interface for editing and simulation, and (b) a smooth and short learning curve. Here UPPAAL clearly stands out.
Mutual exclusion algorithms are popular benchmark examples for model checkers, see for instance [3], and the analysis results of this article are not new, except for the time bound for Peterson's algorithm. Our results on model checking semaphores and monitors are new, to the best of our knowledge. In this article we have not described UPPAAL models of the use of message passing as a synchronization primitive; adding this would be routine.
Closely related to our work is the book of Magee and Kramer [16]. This book provides a nice approach to concurrent programming using state models and Java. State models are described in a textual, process algebraic language called FSP and can be visualized and analyzed using an LTL model checker called LTSA. The consistent combination of state models and Java makes their approach ideal for a course on concurrent programming. Via the use of Java applets, the authors offer appealing visualizations of concurrent behavior, in addition to the visualization of state machines offered by LTSA. The FSP language, however, is much less expressive than the UPPAAL language, and for instance does not support shared variables. This makes it less straightforward to handle mutual exclusion algorithms, as we did in Section 2. Also, the EFSM graphical notation of UPPAAL even allows one to visualize the behavior of complex industrial-sized models, whereas only relatively small models can be visualized using LTSA. Magee and Kramer [16] present a model of semaphores which, in our opinion, is overly abstract: a wait operation is modeled by a single transition (rather than with a pair of a semWait and semGo transition) and information about the order in which processes have been blocked is not preserved. Typically, liveness and real-time properties of concurrent algorithms crucially depend on the order in which processes that are blocked on a semaphore are activated again. Implementations usually adopt a FIFO order. This means that in the approach of Magee and Kramer [16] it is, for instance, not possible to prove liveness or real-time properties for the solution of the dining philosophers with a doorman, such as the 5*U bound we derived in Section 3.3.
As a spin-off, using model checkers in an introductory course also provides a great opportunity to increase the impact of formal methods research. More students will learn about and appreciate model checking technology.
Once students have seen how useful these tools are, they will much faster decide to use them later on when facing similar problems. The more theoretically inclined students become motivated to study the algorithms behind model checkers.
**Acknowledgments** We would like to thank our students for their enthusiasm and help with constructing models, in particular Justus Freijzer, Martijn Hendriks, Bart Kerkhoff, Bart Meulenbroeks, Marc Schoolderman and Koen Vermeer.
**References**
Report
Design and evaluation of spontaneous container services
Author(s):
Popovici, Andrei; Alonso, G.
Publication Date:
2002
Permanent Link:
https://doi.org/10.3929/ethz-a-006654476
Rights / License:
In Copyright - Non-Commercial Use Permitted
Design and Evaluation of Spontaneous Container Services
Technical Report Number 368
A. Popovici, G. Alonso
Department of Computer Science
Swiss Federal Institute of Technology (ETHZ)
ETH Zentrum, CH-8092 Zürich, Switzerland
Abstract
Technologies like Jini offer spontaneous discovery and utilization of services. Container technology (e.g., Enterprise Java Beans) allows the transparent adaptation of the application logic at deployment time. These two approaches tackle different sides of the same problem: how to cope in a flexible manner with dynamic changes in the environment where the application runs. Combining the two approaches leads to a virtual container service infrastructure in which services or functionality extensions can be dynamically added to or removed from an application as needed. This is particularly important nowadays given the increasing pervasiveness of wireless and ad-hoc networks as well as peer-to-peer interaction. In this paper we discuss how this can be done in the context of Java. The system we present effectively and efficiently transforms a Java spontaneous network into a virtual container for services and extensions, thereby providing the advantages of both approaches in a single platform. As a test case, we use this infrastructure to implement important data management functionality: transactional interaction, access control, and container managed persistence, and show how it can be used in an ad-hoc network environment.
1 Introduction
Networking environments that either avoid a fixed infrastructure or allow direct, spontaneous interactions between peers are rapidly becoming available for practical use. To support these new computing environments, it is crucial to abandon the current paradigm in which software capabilities are determined at development time. To a certain extent, this change is already happening. For instance, modern container models [Mic01a] separate the business logic from the system components. Through this separation, key functionality such as transactions, persistence, or security can be transparently added to the application at deployment time rather than having to be implemented as part of every application. Similarly, the increasingly widespread use of wireless networks is leading to another form of adaptation: dynamic service discovery of the type found in systems like Jini [AWO+99].
Unfortunately, the hardware and network platforms underlying future information systems are evolving much faster than the software infrastructure. New computing environments such as spontaneous or ad-hoc networks and peer-to-peer interaction challenge even the flexibility provided by container and dynamic service discovery models. In these new environments, nodes can independently join and leave a community of services. Nodes do not know in advance with which other nodes they will interact. One cannot assume a pre-defined environment, e.g., the Internet, as nodes build their own network on the spot. This makes it very difficult to rely on existing hardware or software infrastructure. As a response to these limitations, ambitious middleware platforms, which promote adaptation, have emerged. R-ORB [YK01] is a context sensitive object request broker based on reconfigurable hardware. In the context of Java, [SGGB99] aims at a network-wide virtual machine that defines dynamic service components (e.g., monitoring or security).
In spite of these efforts, the appropriate information system support for such new computing environments is largely missing. A flexible software infrastructure is needed, in which functionality should be factored out of an application to become a dynamic property of the computing environment. By “computing environment”, we mean any set of two or more applications that decide to interact with each other and might build their own communication network for that purpose. When an application joins a new computing environment, its functionality should be extended and/or adapted as needed. The novelty of this idea is to see the computing environment as the provider of such extensions (rather than, e.g., a static container). Thus, the nodes that make up this ad-hoc computing environment should be able to spontaneously generate any service infrastructure they might need.
In this paper we present a Java-based service infrastructure capable of run-time service adaptation. The system we have built effectively transforms a Jini network into a dynamic, secure service environment functionally equivalent to an Enterprise Java Beans (EJB) container providing container managed persistence, access control and transactional interaction. We concentrate on this functionality because it plays a crucial role in any information system and provides an excellent test-bed to better understand the problems associated with these new computing environments. Using this infrastructure, nodes can dynamically acquire extensions that make their state persistent (the state being stored at a base station or at another node), their interactions transactional (with arbitrary levels of nesting and interacting with either base stations or directly among them), and subject them to an access control policy (to access information in other nodes or at base stations).
These extensions allow the formation of ad-hoc information systems in an entirely spontaneous manner. The resulting architecture has many advantages. It allows the interoperability of mobile and fixed network applications. The container model is an accepted and well understood technology. Unlike other proposals, the approach does not require a particular hardware setting [YK01] or a network-wide virtual machine [SGGB99], which we believe is a significant advantage. By combining discovery and service adaptation, the approach becomes applicable in a wide range of scenarios from the conventional (e.g., application deployment or code upgrades) to the most advanced (e.g., ad-hoc networks or peer-to-peer interactions over wireless networks).
In the paper, we first motivate the work by providing an example scenario that extends existing commercial systems (Section 2). Then we describe the architecture and how functionality can be added at run-time to applications running on a JVM (Section 3). Based on this architecture, we have implemented a spontaneous container providing container managed persistence, access control and transactional interaction. We explain how the container works by discussing every necessary extension step by step (Section 4). We have developed extensions that demonstrate the potential of the approach and what is involved in building a spontaneous information system. We also provide an extensive experimental study of the resulting system as a first step towards identifying the problems that need to be solved to make spontaneous information systems a reality. We discuss these results and conclude the paper in Section 5.
2 Motivation
2.1 Example scenario
Imagine a conference, trade show, meeting, or exhibition hall where participants are provided with computing devices such as lap-tops or PDAs (we will refer to these devices from now on as nodes)\(^1\). Assume there is a common infrastructure that allows service publishing and discovery (e.g., Jini running on a combination of fixed and wireless network). Nodes can communicate with either a base station or directly with each other. Using this ad-hoc platform as the basis, we want to provide the same services offered by a conventional middleware infrastructure. In particular, we would like to be able to provide the basic functionality found in a container (i.e., persistence, security and transactional interaction) but in a completely ad-hoc manner, that is, without relying on any centralized infrastructure. Concretely, nodes should be able to exchange among themselves whatever functionality extensions they need and, after that, be capable of interacting transactionally among themselves, have their state persistently stored somewhere (other nodes or a base station), and follow a simple access control policy.
Important requirements are:
(a) The nodes carry with them only the basic platform support for run time adaptation,
(b) the functionality extensions can be provided by any other node including, but not limited to, base stations,
(c) functionality extensions are only leased; if a node leaves the computing environment the extension disappears, and
(d) functionality extensions can be dynamically updated as policy changes occur without having to interrupt service or having to modify the nodes.
2.2 Target architecture
One way to implement such a scenario is to treat each node (i.e., each service made available by a node) as a component that can be extended at run-time with the necessary functionality. The functionality extensions can be obtained either from a base station or from other nodes upon joining the computing environment. Later on, the node might acquire additional extensions or distribute extensions to other nodes as needed. Upon leaving the network, all extensions are removed. Examples of such extensions are:
A context extension that modifies the communication layer so that context information is added to service calls. This context information will then be used as the basis for access control and transactional interaction.
A persistence extension that maps the local state of a service (e.g., an order received by a merchant) to stable storage located into some other node or base station. This same extension restores the state of the service after a crash.
\(^1\) A similar scenario focusing on basic communication services such as messaging and person identification is already being commercially exploited with over a thousand nodes being connected: http://www.shockfish.com/.
A transactional extension enforces transactional correctness. This extension installs a mini transactional monitor in each node so that invocations to services are trapped and made transactional.
Additional extensions dealing with authentication, access control, or billing are added as needed depending on location, circumstances, number of participants, etc.
2.3 Related work
There has been a significant amount of work on concrete aspects of information systems in, or in connection with, mobile computing environments (e.g., indexing, caching, update consistency, etc.). To our knowledge, much less has been done in the area of software system architecture and infrastructures for information systems in these environments. Nevertheless, some products are starting to appear although still without supporting run-time adaptation. For instance, there are products that export the functionality available in conventional application servers as Jini services [Tec02] so that this functionality can be accessed by mobile devices. However, this is just a matter of interface adaptation and not the run-time adaptation we are proposing.
The work most relevant to what we propose are recent results in software engineering, namely on Aspect Oriented Programming (AOP) [KLM+97]. The basic idea of AOP is to abstract out of an application all orthogonal concerns (functionality) so that they can be treated separately [TOHS99]. Typical examples are distribution, security and logging when this functionality cuts across the system, i.e., it is not located in a single decomposition unit (e.g., class or package). Once this separation has been made, AOP techniques are used to combine the application with the orthogonal concerns when the application is compiled. This process is called weaving and is based on crosscuts, i.e., collections of points in the execution of a program where some additional functionality should be invoked. In AspectJ [XC01, LK98], for instance, these crosscuts could be the invocations of some method(s) of a set of classes.
There exists as well some initial work on adaptive middleware based on AOP and reflection [CBCP01, APW01]. The idea there is to perform QoS adaptations of the CORBA service layer in response to changes in the run-time environment (e.g., network resources) [TJJ00]. This work, however, addresses only conventional middleware platforms not the type of ad-hoc networks we are exploring.
Finally, run-time changes to a program are usually performed in languages that explicitly support run-time adaptability such as composition filters [AWBB94] or reflection [KdR91, IYL95, KG97, OSM+00]. More recently, AOP ideas have been proposed and implemented as part of the PROSE system [PGA02, PGA01], on which this work is based.
3 Spontaneous container architecture
In this section we first briefly discuss PROSE, the language and run-time system used to develop the spontaneous container architecture. We then show how to use PROSE to perform container-like adaptation so that applications can be extended with the necessary functionality. Finally, we show how to extend every node so that it can exchange, provide, and receive extensions at run-time, thereby providing the basic mechanism for spontaneous interaction.
3.1 Language and run-time support
For reasons of space and scope, we will not discuss PROSE in detail. For the purposes of this paper, PROSE is just a run-time adaptation system. The interested reader can find more information about it in recent publications [PGA02, PGA01]. Here we provide a brief overview of its basic mechanisms so that the programming context can be understood.
PROSE is based on a modified JVM that can perform interception and weaving at run-time. However, an application will not see any difference with a standard JVM. The PROSE JVM provides an API for inserting and removing extensions. These extensions specify the code for the extension (what to do) and where it needs to be called (when to do it). Typical examples are:
- Invoke the extension txBegin before entering any method whose name matches the regular expression ".*tx.*" or involves a remote method invocation (RMI).
- Invoke the extension storeNewValue whenever a field is modified in any class whose name matches the regular expression ".*EntityBean.*".
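To make the insertion API concrete, the following is a minimal, hypothetical sketch of a PROSE-style extension registry (the names and the API are illustrative and greatly simplified with respect to the real PROSE system): an extension pairs a crosscut, given as a method-name pattern, with an action, and withdrawing the extension stops the interception.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.regex.Pattern;

// Hypothetical, simplified model of a PROSE-style extension registry.
// A real PROSE JVM performs the interception itself; here the interception
// layer is simulated by calling onMethodEntry() explicitly.
public class ExtensionRegistry {

    public static class Extension {
        final Pattern crosscut;          // where the extension applies
        final Consumer<String> action;   // what to do when it matches

        public Extension(String regex, Consumer<String> action) {
            this.crosscut = Pattern.compile(regex);
            this.action = action;
        }
    }

    private final List<Extension> installed = new ArrayList<>();

    public void insert(Extension e)   { installed.add(e); }
    public void withdraw(Extension e) { installed.remove(e); }

    // Simulated join-point: invoked on every method entry.
    public void onMethodEntry(String methodName) {
        for (Extension e : installed) {
            if (e.crosscut.matcher(methodName).matches()) {
                e.action.accept(methodName);
            }
        }
    }
}
```

With this model, inserting an extension whose crosscut is ".*tx.*" runs its action before every matching method, and withdrawing it makes the interception disappear, mirroring the behaviour described above.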
PROSE extensions are regular Java objects that can be sent over the network. Signatures are used to guarantee their integrity. PROSE is independent of the programming language: it concentrates on the execution inside the JVM and not on the source compiled to produce the byte-code. Once an extension has been inserted in the JVM, any occurrence of the events of interest results in the execution of the corresponding extension. If an extension is withdrawn from the JVM, the extension code is discarded and the corresponding interception(s) will no longer take place.
Requiring all applications to run on Java is currently a limitation. This constraint becomes less critical as similar technology spreads (e.g., the .NET platform). The same type of interception could be done in other platforms (namely, in .NET). In fact, with the appropriate compiler support, aspect-oriented runtime changes can be introduced in any application.
In the architecture we distinguish between two types of adaptations. These adaptations use the same PROSE mechanisms but they play different roles. The first type is the extension functionality. The second type is needed to intercept events of the application and connect the application logic to the extension functionality. We call this second type of adaptation the extension glue, and the points where it must be added join-points. Every extension we employ contains a functionality part and a glue part.
3.2 Adapting functionality through extensions
Consider three nodes, A, B, and C that want to interact as follows. Node A initiates a distributed computation within method m_A (Figure 1.a). It first performs some local operations and then invokes method m_B of the remote service B (step 1). The computation in m_B changes the state of the service B at a place denoted in the figure by “*”. m_B completes, and the results are transferred back to A (step 2) where additional operations are performed locally by A (step 3). The remaining code of m_A involves a second remote call to m_C, carried out in a similar manner (steps 4 and 5). Assume now that these nodes have acquired extensions that will enhance the computation just described with context management (CM), transaction management (TM), container managed persistence (CMP) and access control (ACM).
Figure 1.b shows the control flow once the extensions are in place. As before, the computation starts in m_A. When the remote invocation to m_B is initiated, the TM extension glue traps the call (step 1, dark gray). The extension invokes the TM functionality to create the necessary transactional context (e.g., a transaction identifier). The transactional context and A’s identity are transmitted to B as implicit context (since they do not exist in the signature of m_B). The CM extension (double hatched), located in the communication layer, marshals the implicit context data together with the parameters of the call (step 2), before the invocation takes place (step 3).
At node B, the local CM extension detaches the context data (step 3) and associates it with the current thread of execution. Before the application logic in m_B starts executing, a number of things happen. First, the ACM extension is invoked (simple hatched) to check whether A has the right to access m_B (step 4). If access is granted, the TM extension of B (dark gray) is invoked to keep track of the transactional context and to start m_B as a local transaction (step 5). During the execution of m_B, a state change occurs (•). The CMP extension glue intercepts the state change and notifies the object-relation mapper (step 6), which performs the corresponding update in the database. When the execution of m_B is completed, the TM extension is invoked once again (step 7) to pre-commit the changes carried out in step 6. If TM at B produces any context information that must be shipped back to A, the CM extension marshals this data together with the return of the call (step 8). At A, any context information is extracted from the return values (step 9, double hatched) and passed to the TM extension for processing (step 10). After that, execution of m_A resumes with analogous steps for the call to m_C.
This scenario illustrates the adaptations needed to transform a single node of a spontaneous network into a mini container. If each node is extended in a similar manner, all services will behave as if they would have been deployed in a virtual container.
3.3 Creation of extensions
Extensions can be written with or without knowledge of the source code they will extend. An example of a very useful extension that can be written without knowing the source code is an encryption extension. Such an extension would intercept any incoming or outgoing RMI call to one application and perform the necessary encryption or decryption. Extensions can also be written knowing only the published interface of an application, using the method name, the class name, the signature, or even the parameters to specify where to intercept the execution. An example of what can be done using this information would be a logging extension that creates a log record in some remote database every time a given service is called. Finally, extensions can be written with full knowledge of the source code. In this case, extensions are used for software maintenance and upgrade purposes.
The adaptation mechanism described above supports any of these three types of extensions. For reasons of space, however, we cannot discuss the software engineering issues involved and when to use each type of extension. It suffices to explain how such extensions can be created in the context of a spontaneous container. This has been done by extending PROSE with support for the equivalent of the deployment descriptors used in EJB architectures [Mic01a]. This support comes in the form of a meta-data repository (network meta-data in Figure 2.b) that specifies, in a declarative fashion, what types of services are expected to be adapted and in which way. For instance, it might say that the encryption extension is to trap all RMI calls and encrypt them; a QoS extension is to trap outgoing RMI calls and cancel them if there is not enough bandwidth available; a load balancing extension is to send requests to different servers as dictated by the current load; a billing extension is to trap calls to a specific service and generate a charge for its use.
This information, which needs to be provided by the system’s programmer, is used by the extension base to generate extension functionality and glue from these specifications. The resulting PROSE extension object [PGA02] can be sent through the network.
3.4 Management of extensions
Our approach is fully interoperable with Jini. In addition, all nodes can send extensions to other nodes and can act as lookup services (LUS), if necessary. For simplicity in the exposition, however, we will assume the extensions are being sent from one base station (the same procedure can be performed by any node) and that a lookup service is available.
Figure 2.a illustrates the basic mechanism used to discover and use new services in Jini. A service (B) joins the computing environment by registering a proxy pB at a nearby lookup service (step 1). Other nodes can query the LUS (step 2), obtain pB and use the service B (step 3). Alternatively, nodes can ask to be notified when a service matching a certain template joins or leaves the Jini community.
Our spontaneous container architecture extends this basic idea as follows. We define an adaptation service, based on PROSE, that allows the uploading of extension objects into a node. Each node must carry an adaptation service, which is accessible as a regular Jini service. Figure 2.b illustrates service B running together with an adaptation service (marked in the figure with PROSE) on the same JVM. Like
any other service, the adaptation service joins the Jini community (step 1) and allows other nodes to use its interface. The extension base is continuously scanning the network for new adaptation services (step 2). Once a new adaptation service is discovered, a customized set of extension objects is sent to it (step 3). The immediate effect is for the extension functionality to be instantiated at B (step 4') and the monitoring of the corresponding join-points activated (step 4'').
Using the leasing mechanism provided by Jini [AWO+99], the extension object sent to each node is actually leased. Consequently, when a node leaves or is unplugged from the network, the leases keeping the extension alive fail to be renewed. When this occurs, the instantiated extension is discarded and the glue functionality is dynamically extracted out of the join-points.
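The leasing behaviour can be modelled in a few lines (an illustrative sketch only; Jini's actual leasing machinery is considerably richer). A logical clock is passed in explicitly so the expiry logic stays deterministic:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of leased extensions: an extension stays installed only while
// its lease keeps being renewed; once renewal stops, a sweep discards it.
public class LeasedExtensions {
    private final Map<String, Long> expiry = new LinkedHashMap<>();

    public void install(String extension, long now, long duration) {
        expiry.put(extension, now + duration);
    }

    public void renew(String extension, long now, long duration) {
        expiry.replace(extension, now + duration);
    }

    // Discard every extension whose lease was not renewed in time,
    // returning the names of the discarded extensions.
    public List<String> sweep(long now) {
        List<String> discarded = new ArrayList<>();
        Iterator<Map.Entry<String, Long>> it = expiry.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() < now) {
                discarded.add(e.getKey());
                it.remove();
            }
        }
        return discarded;
    }
}
```

In the real system, the sweep corresponds to the lease expiring at the granting node: the instantiated extension is discarded and the glue is extracted from the join-points, exactly as described above.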
4 Spontaneous container evaluation
In this section, we complete and clarify the architectural description by discussing step by step the creation of a spontaneous container with the four extensions mentioned above (CM, TM, CMP, and ACM). We also include performance measurements to give a clear idea of the costs involved and where optimizations are needed. This container has been implemented as a prototype and it is being extensively used to deploy novel applications over ad-hoc networks.
For the extensions we have used standard libraries (ACM), developed some parts as needed (CMP and CM), and used a commercial product (TM). We discuss only those aspects of the extensions that are important for understanding how they can be embedded within the architecture. However, there are many ways to implement this functionality and the ones we use are just an example of how to go about it.
4.1 Experimental setup
To evaluate the performance we use a varying configuration with three layers of nodes (Figure 3.a). At level zero, nodes act as clients invoking the services implemented at level one. Clients send one request at a time but have no idle time: as soon as a response arrives, the next request is sent. A number of $k$ clients ($C_{i1}, \ldots, C_{ik}$) concurrently use the same service on level one ($L_{1i}$). Similarly, the services at level one call services at level two. Each service $L_{1i}$ (there are $n$ such services) does a sequence of remote invocations to the services $L_{21}, L_{22}, \ldots, L_{2m}$ (there are $m$ such services) at level two. On both levels, each service call performs a number of local operations that update the data structure shown in Figure 3.b. A local operation iterates over the elements in the orders list and updates the state of each TestOrder.
Every experimental configuration is characterized by the tuple $(n, m)$. For the purposes of this paper, we considered all configurations $(n, m)$ with $n$ and $m$ ranging from 1 to 5. For each configuration, we varied the number of clients $k$ until the maximal throughput is reached and then measured the throughput and the response time. The throughput is the average number of invocations per second (inv/sec) performed by all clients. The response time is the average time recorded to complete a call at the client level. With this, the analysis is a worst case scenario in that the load across all nodes is kept artificially high. The idea is to get measurements that act as lower bounds, since in practice operations are likely to run in disjoint subsets of nodes and therefore be less demanding in terms of the resources they need.
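The two metrics can be written down directly (a trivial sketch, just to fix the definitions used in the figures): throughput is invocations per second over all clients, and response time is the mean call duration observed at the client level.

```java
// Sketch of the two measurements used throughout Section 4.
public class Metrics {
    // Average number of invocations per second performed by all clients.
    public static double throughput(long invocations, double elapsedSeconds) {
        return invocations / elapsedSeconds;
    }

    // Average time to complete a call, measured at the client level.
    public static double meanResponseTime(double[] callDurationsSeconds) {
        double sum = 0.0;
        for (double d : callDurationsSeconds) {
            sum += d;
        }
        return sum / callDurationsSeconds.length;
    }
}
```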
To avoid changes of bandwidth, connectivity and availability, we perform all experiments using a cluster of PCs on a local area network (Table 4.1). All service invocations across levels are remote calls and all services reside in different nodes. The nodes are more powerful than high end PDAs but equivalent to lap-tops. During the experiments, the systems were not loaded with user activity. The network provides more bandwidth than wireless networks. The reason to use a LAN is that the type of stress test we conduct cannot be done in current wireless networks due to their low capacity. In a wireless environment, the reduced network bandwidth will be compensated by the fact that a node will generate neither as much traffic nor as complex service invocations as those used in the tests.
4.2 Performance with plain Jini
As a base line for the measurements, we run a series of tests with no extensions involved. Essentially, we are measuring the overhead of Jini and of making remote calls. Figure 4.a illustrates the throughput and Figure 4.b the response time for all configurations. The throughput for the configurations (1,m) is smaller than for the other configurations (2,m) ... (5,m) indicating that a single node on level one is a performance bottleneck in the test performed.
4.3 1st Extension: implicit context
In any distributed system, implicit information must be transparently attached to the parameters of the call at the caller’s side and be detached at the callee. Context is transferred in the opposite direction from the callee to the caller together with the return values. Using this mechanism, non-functional information like authentication tokens or transaction identifiers can be transferred between peers.
For this purpose, the extension base distributes the CM extension, which replaces the communication layer of existing services with a new communication layer capable of transferring additional data on the same network connection. The new communication layer checks for every connection whether implicit context must be sent or received from the peer. This functionality does not use the interception mechanism of the PROSE system.
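A minimal sketch of the idea (hypothetical names; the real CM extension works inside the communication layer, not at the application level): context is kept per thread, marshalled together with the call parameters at the caller, and detached at the callee before the application logic runs.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of implicit context propagation. The "wire" map stands
// in for the extra data the CM extension ships on the network connection.
public class ImplicitContext {
    private static final ThreadLocal<Map<String, String>> context =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { context.get().put(key, value); }
    public static String get(String key) { return context.get().get(key); }

    // Caller side: attach a copy of the current context to the parameters.
    public static Map<String, Object> marshal(Object[] params) {
        Map<String, Object> wire = new HashMap<>();
        wire.put("params", params);
        wire.put("context", new HashMap<>(context.get()));
        return wire;
    }

    // Callee side: detach the context and associate it with the current thread.
    @SuppressWarnings("unchecked")
    public static Object[] unmarshal(Map<String, Object> wire) {
        context.set(new HashMap<>((Map<String, String>) wire.get("context")));
        return (Object[]) wire.get("params");
    }
}
```

This is the mechanism that later lets transaction identifiers and authentication tokens travel between peers without appearing in any service signature.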
To measure the efficiency of the new communication layer, we distribute CM to all nodes. Then we run the test application when no implicit context data is transferred between peers. The observable performance decrease corresponds to the handshakes incurred by the new communication layer. Figure 5.a illustrates the throughput of the test system. In all cases, the total number of invocations per second is down by roughly one third compared to Figure 4. In Figure 5.b, the total height of the bars represents the response time, in seconds, of all (1,m) configurations (corresponding to the leftmost group of measurements in Figure 5.a). The dark gray part represents the time spent in the new communication layer. With no implicit context data generated by the application, the response time increases by approximately 0.03 seconds.
4.4 2nd Extension: implicit context and persistence
In the application server area, standards that promote transparent persistence have emerged. A good example is the container managed persistence promoted by EJB [Mic01a], but more complex models (e.g., JDO [Mic01b]) have been proposed. Their benefits have been analyzed elsewhere [AM95, ADJ+96].
For securing the state of Jini services, a similar approach would be beneficial. Following this idea, we have developed a solution adapted to the dynamic character of Jini. It consists of an object-relational mapper (ORM) and relies on capturing object field changes at run-time using PROSE. The component is small (100KBytes) and can save, restore and update the state of entire (Java) object graphs.
1. <persistent_service>
2. <package_name>ch.ethz.inf.midas</package_name>
3. <class_name>MyJiniBean</class_name>
4. <primary_key>serviceID</primary_key>
5. <persistent_field>name</persistent_field>
6. <persistent_field>foo.*</persistent_field>
7. </persistent_service>
Figure 6: Network meta-data specifying container managed persistence for MyJiniBean.
The installation of the ORM is performed by the CMP extension. The CMP extension inserted into the weaver contains database connectivity parameters and mappings specific to the current network container. After the instantiation, the mapper searches the memory object space of the Jini node. If services, identified by their globally unique Jini IDs [AWO+99], are known to the local database, the ORM attempts to restore their state. If not, it attempts to store the transitively reachable closure of objects in the local database.
Finally, the ORM inspects once again the object space and discovers all fields that have to be synchronized with the database. For all these fields, it installs the PROSE extension glue that reports their modification. The specification of the fields to be watched is generic. As an example, Figure 6 is an excerpt of a specification from the network meta-data. It specifies on line 6 that all fields whose name matches the regular expression "foo.*" and belong to class MyJiniBean should be made persistent. It additionally specifies that the primary key of objects of type MyJiniBean is serviceID.
For the specification of persistence we tried to match existing approaches in EJB. PROSE allows fields to be guarded by means of regular expressions. We use this feature to provide powerful pattern-matching rules for persistence specification.
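The pattern-matching rule from Figure 6 can be sketched as follows (an illustration only; the real ORM installs PROSE glue on the matched fields rather than returning a list of names):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Select which fields of a class become persistent by matching their names
// against the regular expressions taken from the network meta-data.
public class PersistentFieldSelector {
    public static List<String> select(List<String> fieldNames, List<String> fieldPatterns) {
        List<Pattern> compiled = new ArrayList<>();
        for (String p : fieldPatterns) {
            compiled.add(Pattern.compile(p));
        }
        List<String> selected = new ArrayList<>();
        for (String field : fieldNames) {
            for (Pattern p : compiled) {
                if (p.matcher(field).matches()) {
                    selected.add(field);
                    break;
                }
            }
        }
        return selected;
    }
}
```

With the specification of Figure 6 ("name" and "foo.*"), a class with fields name, fooBar, fooCount and serviceID would have the first three made persistent.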
The throughput and response time of the nodes when running with implicit context and container managed persistence added to all services are depicted in Figure 7. Here, too, the throughput decreases with greater values for m: the operation is much more complex and it involves communication with the database for each service involved.
Each bar in Figure 7.b represents the total response time, in seconds, of the first group of measurements in Figure 7.a. The gray section at the bottom represents the time spent together by Jini and the implicit context functionality. The dark gray section above represents the time spent by PROSE to capture field modifications and submit the corresponding field modification events to the ORM. The white part represents time spent in the ORM to map field updates to database update operations. Finally, the light gray part at the top is the time spent in connections to the database (JDBC).
4.5 3rd Extension: security and context
The ACM extension is distributed to each joining node. When inserted, it either creates an ad-hoc identity (key pair) for the node or uses an existing one. The identity is generated using network-specific knowledge. For outgoing connections, the extension authenticates the node against other nodes, while for incoming ones it authenticates the peers. The transfer of authentication data is performed using the functionality provided by the implicit context extension.
The ACM extension is updated by the extension base each time a policy change occurs. It contains information on all known identities and all known services. The access control glue functionality intercepts all remote calls of services on the current node and denies or grants access according to its state. The state of an access control extension is represented by an access control list.
Figure 8.a illustrates the throughput of the tests application when joining a container configured to create ad-hoc identities and perform access control for each service call. The throughput is slightly smaller than in the Jini plus CM extension. Figure 8.b illustrates response times for the (1,m) configurations. Each section, from the bottom to the top, represents the time spent for Jini and the CM extension, transferring identities by ACM (dark gray), interception of remote calls by PROSE (white) and access control matrix access (top-most, light gray).
4.6 4th Extension: transactions, persistence, and context
In a spontaneous network, we want to allow transactions of arbitrary complexity. The objective is that, once all nodes have the necessary software layer added to them, the interactions should become transactional. One of the challenges of this scenario is to be able to guarantee transactional correctness without a centralized component. Recently, a solution to this problem has been developed as part of the CheeTah system [PA00] and is commercially available [Ato02].
CheeTah is essentially a small TP-Monitor that resides in each node of the composite system. CheeTah treats each remote call as a subtransaction of a global root transaction. Thus, a service designed to use CheeTah must wrap the application logic with invocations to the mini TP-Monitor inside each node. The management of the nested transactions is transparent to the application code, and (this is the relevant part) entirely local to the node. No matter how nodes interact with each other, as long as each one of them uses CheeTah, the overall result is correct transactional executions that can also be automatically recovered.
One of the main advantages of CheeTah is that it is very light-weight, less than 300 KBytes of code, and very flexible. Being written in Java, CheeTah is also portable. For our purposes here, CheeTah is the ideal transactional component, since it is portable and small enough to allow transferring the whole code from node to node.
4.6.1 Transactional adaptation
The transactional interaction is implemented by the glue functionality bracketing remote calls. The glue functionality determines whether a remote call corresponds to a root transaction or a sub-transaction. With this information, CheeTah can take over and control the execution of the remote calls as if they were nested transactions.
As an example, consider a node $N_1$ that calls method $m$ of a remote node $N_2$. By intercepting the call, the extension functionality can check whether it is associated with any transaction. If it is not the case, then it associates this call with a root transaction $t_1$. As part of the same call to $N_2$, the implicit context functionality sends the root identifier ($t_1$) and node identifier ($N_1$) so that $N_2$ notices (i) that it is running a sub-transaction and (ii) the location of the parent.
At \( N_2 \), the invocation of \( m \) first executes the extension glue that deals with implicit context. This functionality extracts (and removes) the hidden parameters and sees a root transaction identifier. Accordingly, \( N_2 \) starts a local transaction \( t_2 \) as a sub-transaction of \( t_1 \). The sub-transaction \( t_2 \) is associated to the thread where the invocation of \( m \) runs. In this way, all remote calls made during the execution of \( m \) can be intercepted and treated as sub-transactions of \( t_2 \). For these calls, \( N_2 \) associates the root identifier (\( t_1 \)) with its own node identifier (\( N_2 \)). This is all the information a CheeTah TM needs to detect the structure of a nested transaction [PA00].
When all calls complete, CheeTah must be activated again to gather information regarding all sub-invocations of a thread. This information is used for atomic commitment purposes and it includes the number of sub-transactions invoked [PA00]. As in the forward phase, the CheeTah monitors on each side exchange information among themselves using the implicit context feature. When a CheeTah extension sees that a call that just completed corresponds to a root transaction, this extension starts the 2-Phase-Commit (2PC) protocol. The commitment protocol is performed among the extensions and does not involve intercepting calls. In the current form, the system cannot deal with nodes leaving the community during the commit phase. This and similar issues are typical of transactional interaction and beyond the scope of this paper. Our goal here is to show that, within the limits of conventional transactions, transactional behavior can be dynamically added in a spontaneous network.
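A minimal sketch of the bracketing protocol described above, using toy names (`Node`, `call`) that stand in for the interception glue rather than CheeTah's actual API:

```python
import itertools

_tx_ids = itertools.count(1)

class Node:
    """Toy per-node transaction bracketing; names are illustrative, not CheeTah's API."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def call(self, method, ctx=None):
        # Interception glue: extract the hidden context parameters, if any.
        if ctx is None:
            # No transaction associated with this call: start a root transaction.
            tx = f"t{next(_tx_ids)}"
            self.log.append(("root", tx))
        else:
            # Hidden parameters carry the root identifier and the parent node.
            root, parent = ctx
            tx = f"t{next(_tx_ids)}"
            self.log.append(("sub", tx, root, parent))
        return tx

n1, n2 = Node("N1"), Node("N2")
t1 = n1.call("m")                  # N1: no context, so t1 is a root transaction
t2 = n2.call("m", ctx=(t1, "N1"))  # N2 sees (t1, N1) and starts a sub-transaction
```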
### 4.6.2 Transactional specification
```xml
1 <tx_mgr_config>
2 <db_resource_type>DB_XA</db_resource_type>
3 <db_pool_size>10</db_pool_size>
4 <obj_db_mapping>PersMgrImpl</obj_db_mapping>
5 <tx_mgr_timeout>30</tx_mgr_timeout>
6 </tx_mgr_config>
7 <transactional_services>
8 <transactional_service>
9 <package_name>ch.ethz.midas</package_name>
10 <class_name>JiniBeanInterface</class_name>
11 </transactional_service>
12 </transactional_services>
```
Figure 9: Network meta-data specifying transaction brackets for the methods defined in JiniBeanInterface.
The transactional specification matches closely the design of the system. On the one hand, corresponding to the TM component, a number of network-wide parameters define the type of transactions to be employed in each node (Figure 9, lines 1-6). Examples of such parameters are the timeout specification for in-doubt transactions, the transaction categories to be used (local, XA, compensating), the semantics of high-level locks, and other configuration attributes of CheeTah [Par00]. On the other hand, corresponding to the join-points and associated glue functionality, the names and types of services to be made transactional must be specified (lines 8-11). Specifically, line 10 is used to create a service extension that adds transactional glue code to all methods declared in JiniBeanInterface, irrespective of how they are implemented.
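As a sketch of how a container might read such meta-data, the following parses a cleaned-up, single-rooted copy of the Figure 9 excerpt (the `<network_meta_data>` wrapper element is our own addition, since the excerpt has two top-level elements):

```python
import xml.etree.ElementTree as ET

# A cleaned-up, single-rooted copy of the Figure 9 meta-data.
META = """
<network_meta_data>
  <tx_mgr_config>
    <db_pool_size>10</db_pool_size>
    <tx_mgr_timeout>30</tx_mgr_timeout>
  </tx_mgr_config>
  <transactional_services>
    <transactional_service>
      <package_name>ch.ethz.midas</package_name>
      <class_name>JiniBeanInterface</class_name>
    </transactional_service>
  </transactional_services>
</network_meta_data>
"""

root = ET.fromstring(META)
timeout = int(root.findtext("tx_mgr_config/tx_mgr_timeout"))
services = [
    (s.findtext("package_name"), s.findtext("class_name"))
    for s in root.iter("transactional_service")
]
print(timeout, services)  # 30 [('ch.ethz.midas', 'JiniBeanInterface')]
```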
4.6.3 Transactional performance
Inserting the TM extension leads to a visible loss of performance. Figure 10.a illustrates the throughput of the system when all service invocations are treated as transactions. The total response time increases significantly (Figure 10.b) because for each level one service invocation, now a root transaction, an additional round of 2PC must be carried out. In Figure 10.b, the top section of each bar represents the time spent for transactional coordination. The dark gray section in the middle represents the time spent in the TM glue, while the bottom part is the time spent for transparent persistence and application logic.
5 Discussion and conclusions
The low throughput reached when several extensions are inserted, TM in particular, might seem disappointing. However, the experiments performed are really a stress test. Interactions are very complex (involving between 6 and 30 remote accesses) and, once transactions or persistence are involved, also very costly (in the case of TM, the 2PC at the end involves many nodes and, in the case of persistence, every service triggers a remote transaction in the centralized database). A more realistic load, especially if it is manually generated by users, will be much less demanding and will result in considerably higher throughput. Obviously the network bandwidth plays a very important role here, especially given that it is quite limited in existing wireless environments. This bandwidth, however, will only grow in the future, and advances in network protocols such as dynamic scatternets, multi-hop frequency access, and simply larger bandwidth will alleviate this situation.
With this in mind, the results provided are quite encouraging. We do not pretend that the container is ready to be used in a large-scale network. Nevertheless, the experiments show that the mechanisms for creating a spontaneous information system do not have a significant effect on the overall performance when compared with the intrinsic cost of container-managed persistence or transactions. These high costs are inherent to the nature of the extensions and have little to do with the method used to insert them into the application. Even if the insertion happens at deployment time, the experiments show that the larger part of the costs is produced at run-time and is independent of the insertion method chosen. There are many other data management extensions that can be of great use in such environments and that will not incur such high costs. Examples are extensions for load balancing access to databases in base stations, for replication of RMI calls to a server and a backup, for best-effort logging based on files rather than database transactions, etc. For these extensions, the spontaneous container offers significant advantages over existing infrastructures.
Compared to other platforms that promote adaptability based on specialized hardware or software platforms, our approach is entirely compatible with Jini (the adaptation is exported as a service, the weaver). Even as a first step, the results provided constitute an excellent indication of where to optimize information systems so that they can be efficiently used in an ad-hoc computing environment. In this regard, the spontaneous container is public domain software and, as our experiments have shown, can be a very powerful platform for experimenting with issues related to mobility and reconfigurability of the IT infrastructure.
References
AN ALGEBRA OF RELATIONS FOR MACHINE COMPUTATION
the paper that should have been the game changer, with annotations by Hugh Darwen, 2014
Patrick Hall, Peter Hitchcock, Stephen Todd
IBM (UK) Scientific Centre
Neville Road
Peterlee
Co. Durham
England
Abstract:
This paper extends the relational algebra of data bases, presented by Codd [4] and others, in four areas. The first is the use of selector names to remove order dependencies from the columns of a relation. Secondly, this use of selector names enables us to define a more general class of operations, which include the normal relational operations of union, equijoin etc., as special cases. Thirdly we introduce relations represented algorithmically as well as by a stored set of tuples. Such computed relations cannot always be effectively realised as a finite set of tuples. Finally we consider relational expressions as algorithmic representations of relations and characterize their effectiveness.
HD: I suggest that this paper would have been a game changer—showing the right way forward with the relational model—if only it and its aftermath in software development at the IBM UK Scientific Centre had received wider attention than apparently it did.
The original text was in the usual two-column format and in something like Courier throughout. In other respects I have tried to retain the original formatting.
I believe I read this paper in 1978, when my colleague Brian Rastrick and I visited Peterlee to learn about ISBL, the language, and PRTV, its implementation, but I didn’t know much about logic and set theory at that time and much of the formal text would have been over my head. Instead, I was heavily influenced by the language, ISBL, whose relational operators are based on the abstract algebra defined herein, in the same way that the relational operators of Tutorial D [16] are defined in terms of the abstract algebra, A, defined by Chris Date and myself [16] and first published in 1998.
The first implementation of ISBL, named IS/1, was developed at the Scientific Centre as early as 1972. IS/1 was later renamed Peterlee Relational Test Vehicle (PRTV) at the behest of executive management at IBM.
I have shaded in yellow some points I find particularly noteworthy.
In my notes, labelled HD1 to HD23, the abbreviations HHT, TTM and BS12 refer to the subject paper by Hall, Hitchcock and Todd, The Third Manifesto [14], and Business System 12 [15], respectively.
Acknowledgments: Stephen Todd reviewed the first draft of my annotations and in so doing gave me some additional material as well as correcting a couple of errors. Chris Date did the same.
1. Motivation.
We may think of a relation as a table with rows and columns. Row ordering is unimportant but in the relational algebras developed so far a knowledge of column ordering has been necessary to specify certain operations. When tables have many columns, this can be tedious. Codd [4] discusses the use of what he calls relationships, where the components of a tuple or relation are identified by role name and not position. The formal details of relationships were not followed up by Codd, as he was only interested in them for user convenience. He does not seem to have considered the role names of the result of a relational expression. This is not a straightforward problem. For example, given a relation with role names A and B identifying the columns, if we use the project operator to define a new two column relation which has both columns identical to the original A column, the role names cannot be simply inherited. Problems concerning the use of role names have not been fully realised in the literature, let alone adequately solved. Our solution involves generalizing the relational operations so that the use of role names controls the semantics of the operations. The particular method we present below is one approach: we have concurrently also developed an approach based on characteristic functions and lambda notation.
The “role names” mentioned here are what soon became known as attribute names. The authors identify the very problem that BS12 planners (myself and Brian Rastrick) were confronted with in 1978. We had been assuming all along, from Chris Date’s teaching, that attribute names would be used to identify “columns” but couldn’t figure out how the operators of Codd’s algebra would work and we found serious problems with the use of “dot qualifiers” in his calculus notation. Codd did later consider the “column naming” problem and attempted to address it; but his treatment was rather ad hoc and in fact seriously deficient (see [19], pages 148-152).
The example given, suggesting that the project operator can be used to duplicate a column, will no doubt give cause for surprise. The authors are assuming the definition of “generalized project” (Defn 3.5 in Section 3), which incorporates attribute renaming and allows the same attribute to be “renamed” more than once, so to speak. But in that case, all except one of those “renamings” is really an extension.
The remaining text in this section begins to address the other big concern we had in 1978: given a relation with attributes qty and price, for example, how to obtain a relation that gives the total cost (qty*price) for each item. Codd’s algebra did not address this issue.
The second area we have attacked has been that of including functional operations on relations. We only consider here first order functions (those which take a row of a relation and produce a row of a result relation) as opposed to second order ones which act across tuples, see Hitchcock [9]. Such operations have been easy to specify in the relational calculus, due to the ability to reason with free variables. Consider:
\[ C = \{ <a,b,c> : b = \sin(a) \text{ & } <a,c> \in D \} \]
We would define such an operation on D by defining a relation SINE using an algorithm, and combine SINE and D using one of our generalised operations. While considering first order operations on relations, we have also considered the storing of sets as procedures which would successively generate the elements of the set—for example, the set INT of non-negative integers.
Algorithmic relations cannot be used as freely as stored relations. For example:
(i) a typical relation SINE may only be used to compute the sine given the angle, but not the inverse.
(ii) the relations SINE and INT are potentially infinite and the evaluation of relational expressions involving them may not terminate.
2. Relations with selectors.
HHT now uses the term selector for the attribute names that Section 1. Motivation referred to as role names. (TTM uses “selector” for something completely different.)
We develop a precise notion of relation. In normal usage, this is done via set, tuple, and cartesian product. We take a parallel approach via set, tuple with selectors, and cartesian product with selectors, and define a relation with selectors which is equivalent to Codd’s ‘relationship’. Because we deal only with tuples with selectors we will drop the ‘with selectors’ after the initial exposition.
We assume a universal underlying set U. The elements of U are called objects. In a working system we would further divide the objects into ‘domains’—this gives additional complications with compatibility of operators, etc. which we will not consider here. However we will discuss multiple domains with respect to their effectiveness in later sections.
These “objects” are what TTM refers to as values. HHT does not mention whether relations are also objects—i.e., whether a relation can be an attribute value—but whether they are or not has no effect on the basic algebra. I assume that the domains mentioned here are what we now call types. The kinds of operations in which relations as attribute values become relevant—grouping and aggregation, for example—are not mentioned in HHT, though aggregation via grouping was later supported in PRTV.
Defn. 2.1: a tuple t, with selectors S, is a function t:S → U. S is the selector set for t, and an element of S is a selector name for t. The elements of the range of t are called the objects of t. t(s) is called the value of s for the tuple t.
TTM defines a tuple as a set of ordered triples, <a, v, t>, where a is an attribute name, v a value, t the type of v, and no two triples within the same tuple have the same a. As t is implied by v, TTM’s definition is entirely consistent with HHT.
Defn. 2.2: the cartesian product U(S) with selectors S is the set of all tuples with selectors S. TTM calls this set a tuple type.
Defn. 2.3: a relation with selectors S is a set of tuples with selectors S. Equivalently a relation with selectors S is a subset of the cartesian product with selectors S.
TTM uses the term body for the set of tuples and defines a relation as a heading paired with a body, the heading being a function of S to type names
which merely incorporates the “domains” that the authors have identified as being needed in a “working system”.
Defn. 2.4: the degree of a relation (tuple, cartesian product) with selectors $S$ is the number of elements of $S$. □
Defn. 2.5: the cardinality of a relation is the number of tuples in the relation. □
**HD8** *TTM’s definitions of degree and cardinality are equivalent to HHT’s.*
**Example 2.1:**
Consider the relation $R$, represented by the table
<table>
<thead>
<tr>
<th>Cust no.</th>
<th>part no.</th>
<th>qty</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>49</td>
<td>3</td>
</tr>
<tr>
<td>241</td>
<td>74</td>
<td>7</td>
</tr>
<tr>
<td>333</td>
<td>33</td>
<td>3</td>
</tr>
<tr>
<td>123</td>
<td>50</td>
<td>4</td>
</tr>
</tbody>
</table>
The selector set for $R$ is cust no, part no, qty. The degree is 3, the cardinality 4. $R$ contains a tuple, $t$, which we can represent by:
<table>
<thead>
<tr>
<th>Cust no.</th>
<th>part no.</th>
<th>qty</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>49</td>
<td>3</td>
</tr>
</tbody>
</table>
This tuple might mean that customer number 123 has ordered 3 of part number 49. We can identify the quantity object for this tuple using the selector name qty, because $t(qty) = 3$.
**HD9** In *Tutorial D* $t(qty)$ becomes qty FROM $t$.
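Since a tuple with selectors is just a function from selector names to objects, a small model can use dictionaries, frozen into hashable sets of (selector, object) pairs so that relations can be Python sets. Selector names are rendered as identifiers such as `cust_no`, and the helper names are ours:

```python
# A tuple with selectors S is a function t: S -> U; model it as a dict,
# frozen into a hashable set of (selector, object) pairs.
def tup(**kv):
    return frozenset(kv.items())

def value(t, s):          # t(s): the value of selector s for tuple t
    return dict(t)[s]

# The relation R of Example 2.1 (selector "cust no" written cust_no, etc.)
R = {
    tup(cust_no=123, part_no=49, qty=3),
    tup(cust_no=241, part_no=74, qty=7),
    tup(cust_no=333, part_no=33, qty=3),
    tup(cust_no=123, part_no=50, qty=4),
}

t = tup(cust_no=123, part_no=49, qty=3)
degree = len(dict(next(iter(R))))    # |S| = 3
cardinality = len(R)                 # 4 tuples
assert t in R and value(t, "qty") == 3
```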
It is important to note that not only is the row ordering of the table representation of $R$ unimportant, the column ordering also does not matter. It is not meaningful to talk of the third column of a relation with selectors, rather we talk of the qty column. Also the column headings must be carried into a representation of the tuple.
**HD10** That last sentence is consistent with *TTM*. The point about column ordering was crucial for BS12. Prerelational scripting languages for queries and reports used field names for record components and scripts did not depend at all on the order in which the fields appeared. To introduce such a dependency would have been a backward step. (For “column headings”, read “selectors”, i.e., attribute names again.)
### 3. The Generalised Operators.
We begin with a definition of all the operators, and then explain them in terms of the usual relational operators. Finally we discuss the algebra induced by these operations.
For all the definitions, we assume that the relations $R_1, R_2, Q$, have selector sets $S_1, S_2, P$, respectively.
**Notation:** $t|S$ means function $t$ restricted to the subset $S$ of its domain; $t \circ q$ is functional composition, and is the function whose application to object $x$ is equivalent to $t(q(x))$; $S_1.S_2$ means the usual set intersection of sets $S_1$ and $S_2$.
**Defn. 3.1. Generalised Intersection.**
\[ R_1 \ast R_2 = \{ t : t \text{ in } U(S_1 \cup S_2) \text{ & } t \mid S_1 \text{ in } R_1 \text{ & } t \mid S_2 \text{ in } R_2 \} \]
Generalised intersection is the \texttt{AND} operator of the abstract algebra \( A \) defined by Chris Date and myself and also \textbf{Tutorial D}'s JOIN (Codd's "natural join").
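Under the same dictionary model of tuples, a naive generalised intersection might look like this (the helper names are ours; with equal selector sets it degenerates to set intersection, with disjoint selector sets to a cartesian product):

```python
def tup(**kv): return frozenset(kv.items())

def g_intersect(r1, r2):
    """R1 * R2: tuples over S1 ∪ S2 whose restrictions to S1 and S2
    belong to R1 and R2 respectively (i.e. the natural join)."""
    out = set()
    for t1 in r1:
        for t2 in r2:
            d1, d2 = dict(t1), dict(t2)
            # the two tuples must agree on the common selectors S1 . S2
            if all(d1[s] == d2[s] for s in d1.keys() & d2.keys()):
                out.add(frozenset({**d1, **d2}.items()))
    return out

R1 = {tup(a=1, b=2), tup(a=2, b=3)}
R2 = {tup(b=2, c=9), tup(b=7, c=8)}
assert g_intersect(R1, R2) == {tup(a=1, b=2, c=9)}   # join on the common selector b
assert len(g_intersect(R1, {tup(x=1)})) == 2         # disjoint: cartesian product
```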
**Defn. 3.2. Generalised Union.**
\[ R_1 + R_2 = \{ t : t \text{ in } U(S_1 \cup S_2) \text{ & } ( t \mid S_1 \text{ in } R_1 \lor t \mid S_2 \text{ in } R_2 ) \} \]
Generalised union is the \texttt{OR} operator of \( A \). Note that if some attribute of either operand is of an infinite type and is not a common attribute, then \( r_1 \texttt{OR} r_2 \) is infinite. This issue is discussed in Section 4 of HHT.
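The finiteness caveat can be made concrete: enumerating U(S1 ∪ S2) is only possible when every domain is finite and given explicitly, as in this sketch (all names are ours):

```python
from itertools import product

def tup(**kv): return frozenset(kv.items())

def restrict(t, S):
    return frozenset((k, v) for k, v in t if k in S)

def g_union(r1, s1, r2, s2, domains):
    # Enumerate U(S1 ∪ S2); feasible only because each domain is finite.
    S = sorted(s1 | s2)
    out = set()
    for values in product(*(domains[s] for s in S)):
        t = frozenset(zip(S, values))
        if restrict(t, s1) in r1 or restrict(t, s2) in r2:
            out.add(t)
    return out

R1 = {tup(a=1, b=2)}
R2 = {tup(b=3, c=9)}
result = g_union(R1, {"a", "b"}, R2, {"b", "c"},
                 domains={"a": [1, 2], "b": [2, 3], "c": [9]})
# result holds (a=1,b=2,c=9), (a=1,b=3,c=9) and (a=2,b=3,c=9)
```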
**Defn. 3.3. Project.**
Let \( P \) be contained in \( S_1 \),
\[ R_1 \mid P = \{ t' : t' = t \mid P \text{ & } t \text{ in } R_1 \} \]
In \( A \), projection is defined in terms of an attribute of the relation operand that is to be excluded: \( r \texttt{REMOVE} A \) denotes the projection of \( r \) on all attributes except \( A \). BS12 and \textbf{Tutorial D} both allow projection to be defined either way: attributes to be retained or those to be dropped—but this is in any case only a matter of convenience.
**Defn. 3.4. Generalised Difference.**
\[ R_1 - R_2 = \{ t : t \text{ in } R_1 \text{ & } t \mid (S_1.S_2) \text{ not in } (R_2 \mid (S_1.S_2)) \} \]
A supports negation by defining \texttt{NOT} \( r \) to be the relation complement of \( r \): in HHT terms, the tuples of \( U(S) \) that are not contained in \( r \). This is not suitable for computational purposes because when an underlying domain is infinite, it yields an infinite relation. HHT’s \( R_1\texttt{-R2} \) can be defined as \( R_1 \texttt{AND} (\texttt{NOT} R_2) \), which becomes \( R_1 \texttt{NOT MATCHING} R_2 \) in \textbf{Tutorial D}.
David Maier [17] refers to this operator as \textit{antijoin}. The term \textit{semidifference} has also been advanced for it, as being the opposite of \textit{semijoin} (MATCHING in \textbf{Tutorial D}), a shorthand for which ISBL did not have a counterpart.
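A sketch of generalised difference under the same finite, dictionary-based model (names ours):

```python
def tup(**kv): return frozenset(kv.items())

def restrict(t, S):
    return frozenset((k, v) for k, v in t if k in S)

def g_difference(r1, r2):
    """R1 - R2: tuples of R1 whose restriction to the common selectors
    S1 . S2 does not appear in the projection of R2 on those selectors."""
    if not r1 or not r2:
        return set(r1)
    s1 = {k for k, _ in next(iter(r1))}
    s2 = {k for k, _ in next(iter(r2))}
    common = s1 & s2
    r2_proj = {restrict(t, common) for t in r2}
    return {t for t in r1 if restrict(t, common) not in r2_proj}

# suppliers without any entry in the shipments relation R2:
R1 = {tup(s=1, city="London"), tup(s=2, city="Paris")}
R2 = {tup(s=1, p=10)}
assert g_difference(R1, R2) == {tup(s=2, city="Paris")}
```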
**Defn. 3.5. Generalised Project.**
Let \( q \) be a function from selector set \( P \) to selector set \( S_1 \),
\[ R_1\%q = \{ t' : t' \text{ in } U(P) \text{ & } (\text{there exists } t \text{ in } R_1 : t' = t \circ q) \} \]
An element of \( q \) might be written as \( a \rightarrow b \), where \( a \) is a selector of \( R_1 \) and \( b \), if not equal to \( a \), is a selector that is not a selector of \( R_1 \). Thus we can see here, in addition to projection, the beginnings of the operators that later came along for attribute renaming and extension (RENAME and EXTEND in \textbf{Tutorial D}—note that RENAME can be defined in terms of extension followed by projection).
BS12’s projection operator was equivalent to generalised project except that q was required to be injective, meaning it could not be used to “duplicate columns”. Here “selector set P” appears to be an arbitrary set of selectors, not as in Defn. 3.3, where it is “contained in S1”.
The generalised intersection is probably the most useful operator. When R1 and R2 are of the same type (i.e. S1=S2), the generalised intersection is set intersection. At the other extreme, if S1 and S2 do not overlap, we have defined a cartesian product or ‘quadratic join’ of R1 and R2. In between, we have an equijoin [4] of the two relations, the joining components being the selectors common to both R1 and R2 (i.e. S1.S2).
**HD16** Actually, it was Codd’s “natural join”, not equijoin, that used the common selectors. The equijoin of r1 and r2 on A=B, where A is an attribute of r1 and B an attribute of r2, retains both attributes A and B—and so gets into trouble when A and B have the same name!
Generalised intersection can also cover the notion of select. Suppose we have a selection criterion or ‘filter’ defined on certain selectors. We can define a relation F (possibly infinite) with selector set containing just those selectors, and whose tuples are just those tuples which satisfy the filter. Then the selection on a relation R using the filter is the same as the generalised intersection between the relation R and the relation F derived from the filter.
**HD17** This observation is emulated in the description of A, which does not include a selection or restriction operator. It hints at the idea, discussed later in the paper, of allowing the selection condition to use whatever boolean operators are available, in any combination. A similar observation applies to the next paragraph, hinting at the possibility of the operator we now call extension.
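The spectrum described above (set intersection, natural join, cartesian product) and the selection-as-intersection observation can both be sketched in Python. This is my rendering under the same tuples-as-dicts assumption as before; the order/price data is invented.

```python
# Sketch of Defn. 3.1 (generalised intersection), assuming tuples are
# dicts and relations are lists of dicts. Not the paper's own code.

def compatible(t1, t2):
    """Two tuples agree on every selector they share (S1.S2)."""
    return all(t1[s] == t2[s] for s in t1.keys() & t2.keys())

def intersection(r1, r2):
    """R1 * R2: merged tuples whose projections lie in both operands."""
    return [{**t1, **t2} for t1 in r1 for t2 in r2 if compatible(t1, t2)]

orders = [{"cust": "C1", "part": "P1"}, {"cust": "C2", "part": "P2"}]
prices = [{"part": "P1", "price": 10}, {"part": "P2", "price": 20}]
joined = intersection(orders, prices)   # equijoin on the shared "part"
print(joined)

# Selection as intersection with a (here finite) filter relation F:
cheap = [{"price": 10}]
print(intersection(joined, cheap))
```

With disjoint selector sets `compatible` is vacuously true and the same function yields the quadratic join; with identical selector sets it yields set intersection.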
Another use of intersection is functional application. A relation, viewed as a function mapping a subset of its components to the remaining components, can be applied to a set of arguments to yield a set of results.
Generalised union degenerates to normal union when R1 and R2 have the same selectors. When the selectors are different, generalised union seems to have useful properties related to undefined values and the merging of heterogeneous files. We do not fully understand the implications of these yet.
**HD18** A commendably cautious treatment, considering what happened later when outer joins were added to SQL!
Generalised project has several uses. It can be used to duplicate components, but more usefully to rename components that are to be used in a join. It is informally called rename. For example, suppose we have relations, one of which has a component date-due, and the other of which has a component date-returned. If we wished to join on these components (to find which books were returned on the last possible day), we would need to rename at least one component so that the names were identical. Generalised intersection would then perform the join.
**HD19** Regarding its use “to duplicate components”, see HD14. In Tutorial D we define separate operators for projection and attribute renaming. Duplicating a component is done with EXTEND. I understand that in the later versions of PRTV projection, renaming and extension were all combined in a single operator, as in SQL’s SELECT clause. This idea was considered for BS12 but
was not adopted because we wanted multiple attribute renamings to be simultaneous, in parallel—to allow names to be swapped, for example— whereas we wanted multiple extensions to be done consecutively, so that, for example, \( \{ y := x + 1, z := y \times 2 \} \) and \( \{ y := x + 1, z := (x + 1) \times 2 \} \) would be equivalent.
Project is a special case of the generalised project, where \( q \) is the inclusion mapping of \( P \) into \( S_1 \). We have included it for notational convenience.
Finally we come to generalised difference. Once again, if \( R_1 \) and \( R_2 \) have the same selectors, then this degenerates to set difference. As an example of its more esoteric use, let \( R_1 \) and \( R_2 \) represent a hierarchy with \( R_1 \) containing the tuples of the parent segments, \( R_2 \) the tuples of the child segments, including the fully concatenated key of the parent. The selectors corresponding to the fully concatenated key are the only overlap of \( S_1 \) and \( S_2 \). Then \( R_1-R_2 \) gives the childless parent records, and \( R_2-R_1 \) gives the orphaned child records.
Using these operations we can combine relations within expressions. Each expression defines a relation. The selectors of this relation can be readily obtained from the selectors of the operand relations as given below:
\[
\begin{align*}
\text{selectors}(R_1 \times R_2) &= \text{selectors}(R_1) \cup \text{selectors}(R_2) \\
\text{selectors}(R_1 + R_2) &= \text{selectors}(R_1) \cup \text{selectors}(R_2) \\
\text{selectors}(R_1 \mid P) &= P \\
\text{selectors}(R_1 - R_2) &= \text{selectors}(R_1) \\
\text{selectors}(R_1 \% q) &= \text{domain}(q)
\end{align*}
\]
**HD20** The choice of “\( \text{domain}(q) \)” in that last definition is a little unfortunate, considering the previous references to Codd’s use of the term domain. Here it appears to mean the domain of the function \( q \), which is indeed a set of selectors.
Relations with selectors, together with the binary operations (generalised) union, intersection, and difference, form a Boolean Algebra. To demonstrate this structure, we require a universal element, a null element, partial ordering relationships, and must then verify the various idempotency and other algebraic laws. For the universal element, we suppose that all selectors are drawn from a set \( \Sigma \) of selectors, when the universal element is \( U(\Sigma) \). The null element is the empty relation of no selectors (\( U(\phi) \) is the cartesian product of no selectors, which is either empty or full, these two conditions being effectively the truth values).
**HD21** That last sentence delightfully hints at what many years later became TABLE_DEE and TABLE_DUM. I question “being effectively the truth values”. A relation is a relation and a truth value is a truth value. Under the interpretation of a relation as the extension of a predicate, a relation of degree zero is the extension of a predicate with no parameters, which therefore has just one instantiation. The empty relation denotes falsehood for that single instantiation whereas the “full” one (containing a single tuple) denotes its truth.
The ordering relationship is relation inclusion, where a relation \( R_1 \) contains another relation \( R_2 \) if the selectors of \( R_1 \) include the selectors of \( R_2 \), and the difference \( R_2-R_1 \) is empty. The various algebraic laws then follow, with universal complement obtained from generalised difference.
4. The Representation of Relations and their Effectiveness.
Sections 4, 5, and 6 are rather heavy going, a lengthy investigation leading to the important conclusions, in Section 7, that I alluded to in HD2. We might even conjecture that the undue amount of text on this particular topic got in the way of the most important points of this paper, thereby reducing its impact back in 1975.
I don’t give any annotations on these three sections. Rather, in Section 7, Conclusions, I try to show what the important hints given in that section were really driving towards.
When we come to embody relations in computing machinery, we find two extremes. At the one end of the spectrum we have relations which are stored explicitly as sets of tuples, and at the other end we have relations which are pure predicates, i.e. we can only recognise if a given tuple is in the relation or not. (We do not consider those relations whose characteristic functions are not decidable.) In the preceding sections defining relations and the operations on them we deliberately do not distinguish between such relations. However they have very different properties when it comes to generating their contents, as the following examples make clear. What we wish to do in the following sections is to explore the different behaviour of such relations, and expressions involving them, when we try to generate them.
Examples :
Ex. 4.1: If we are given a representation of a relation which is a table, as in Ex. 2.1, then we can generate all of the tuples in the table without additional input. These are the relations as they are usually conceived in relational data bases.
Ex. 4.2: We have a binary relation, SINE, between an angle and its sine, which is represented by a procedure to calculate the sine of a given angle. We cannot generate the extent of the relation itself, although we can generate the result of a relational expression which applies it to a finite set of angles. We cannot, however, apply the representation of sine so that we obtain arcsines.
Ex.4.3: If we have a relation which is a pure predicate, such as a test whether a given point is in a particular polygon, then we require that it is used in relational expressions as a pure predicate. We call such representations of relations recognisers.
Ex. 4.4: Returning to ex. 4.2, we could consider the same representation augmented by a second procedure which computes the arcsine, thus allowing the representation to be used in both directions. Note that this still does not give a generation capability.
We will use the term ‘effectiveness’ to talk about the generation properties of the representations of relations and relational expressions.
We will characterise the effectiveness of a representation in terms of a set of input sets. We will call a subset of the selectors of a relation an ‘input set’ for the representation if, given values for these selectors we can then obtain a complete set of values. We gather together all such input sets into a set of input sets which is the effectiveness of the representation. Note that if any set of selectors P is an input set, then any set Q containing P is also an input set.
Returning to our examples above, we now characterise the effectiveness of their representations using sets of input sets. We write the effectiveness of a representation of relation R as E(R).
Ex. 4.5: for Ex. 4.1 we have
\[
E(R) = \{ \emptyset, \{\text{cust no}\}, \{\text{part no}\}, \{\text{qty}\}, \{\text{cust no, part no}\}, \{\text{cust no, qty}\}, \{\text{part no, qty}\}, \{\text{cust no, part no, qty}\}\}.
\]
This reads ‘given no inputs, or any set of inputs, we can obtain all the other components.’
Ex. 4.6: for Ex. 4.2 we have
\[
E(\text{SINE}) = \{\{x\}, \{x, sx\}\}.
\]
This reads ‘given the value of the angle x, we can obtain the value of the other component, namely the sine sx of the angle; or given both angle x and number sx, we can test whether sx=sin(x).’
Ex. 4.7: for Ex. 4.3 we have
\[
E(\text{POINT-IN-POLYGON}) = \{\{\text{point, polygon}\}\}.
\]
This reads ‘given a point and a polygon, we can test whether the point is in the polygon’.
Ex. 4.8: for Ex. 4.4 we have
\[
E(\text{SINE}) = \{\{x\}, \{sx\}, \{x, sx\}\}.
\]
This reads ‘given the angle, we can compute the sine, or, given the sine, we can compute the angle, or, given both, we can test for membership’.
To use effectiveness in a particular case, we find the set P of selectors for the input values. If P is in the effectiveness, we know that the representation will be adequate for this case.
The view of relations as mappings from a subset of the components to another subset of the components is by no means new. It has even occurred within relational data bases, as the ‘mapping’ concept of SQUARE [1] and SEQUEL [2], where it is used as a user-friendly language device; and it has appeared as a means of representing functional constraints within ‘normalisation’ [5,12].
5. The Effectiveness of Expressions.
When relations are combined by relational operators within an expression, the result is also a relation. If we have representations of the operands, then the relational expression together with these representations forms a representation of the result. Given the effectiveness of the representations of the operands, we can deduce the effectiveness of the expression.
We require, for each relational operation, to specify the effectiveness of the result as a function of the effectiveness of the operands. In the following we will write selectors(R1)=S1 and selectors(R2)=S2, so that the two relations R1 and R2 have the selectors \( S_1.S_2 \) in common; we write \( S_1 \Delta S_2 \) for the symmetric difference of \( S_1 \) and \( S_2 \).
\[
E(R_1 * R_2) = \{ P : (P.S_1 \text{ in } E(R_1) \text{ and } (P \cup S_1).S_2 \text{ in } E(R_2)) \text{ or } (P.S_2 \text{ in } E(R_2) \text{ and } (P \cup S_2).S_1 \text{ in } E(R_1)) \}.
\]
\[
E(R_1 + R_2) = \{ P : P.S_1 \text{ in } E(R_1) \text{ and } P.S_2 \text{ in } E(R_2) \text{ and } P \text{ contains } S_1 \Delta S_2 \}.
\]
\[
E(R_1 \mid P) = \{ Q : Q \text{ contained in } P \text{ and } Q \text{ in } E(R_1) \}.
\]
\[
E(R_1 - R_2) = E(R_1) \text{ if } S_1.S_2 \text{ in } E(R_2), \ \emptyset \text{ otherwise}.
\]
\[
E(R_1 \% q) = \{ P : q(P) \text{ in } E(R_1) \}.
\]
Note that the last three operations all involve quantification, and that in some cases it is possible that the operation is totally ineffective, with no input sets at all.
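The propagation rules above can be executed directly. The sketch below is my Python rendering of the formulas for intersection, union, projection, and difference (I take the difference rule as inheriting the effectiveness of R1, with R2 needed only as a recogniser on the common selectors); the effectiveness values for INT, SINE, and COSINE come from Ex. 4.6 and Ex. 5.2.

```python
# Sketch: computing the effectiveness of an expression from the
# effectiveness of its operands. Selector sets are Python sets; an
# effectiveness is a set of frozensets of selector names.
from itertools import chain, combinations

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

def e_intersect(e1, s1, e2, s2):
    """E(R1*R2): feed inputs to one operand, carry its outputs to the other."""
    return {p for p in subsets(s1 | s2)
            if (p & s1 in e1 and (p | s1) & s2 in e2)
            or (p & s2 in e2 and (p | s2) & s1 in e1)}

def e_union(e1, s1, e2, s2):
    """E(R1+R2): inputs must work on both operands and cover S1 delta S2."""
    return {p for p in subsets(s1 | s2)
            if p & s1 in e1 and p & s2 in e2 and (s1 ^ s2) <= p}

def e_project(e1, keep):
    """E(R1|P): input sets of R1 that fit inside the kept selectors."""
    return {q for q in e1 if q <= keep}

def e_difference(e1, s1, e2, s2):
    """E(R1-R2): R2 must be usable as a recogniser on S1.S2."""
    return e1 if (s1 & s2) in e2 else set()

# Ex. 5.1: INT is a generator of x; SINE maps x to sx.
e_int = {frozenset(), frozenset({"x"})}
e_sine = {frozenset({"x"}), frozenset({"x", "sx"})}
r = e_intersect(e_int, {"x"}, e_sine, {"x", "sx"})
print(sorted(sorted(p) for p in r))      # all four subsets: a generator

# Ex. 5.2: SINE+COSINE is only a recogniser.
e_cos = {frozenset({"x"}), frozenset({"x", "cx"})}
u = e_union(e_sine, {"x", "sx"}, e_cos, {"x", "cx"})
print(sorted(sorted(p) for p in u))      # [['cx', 'sx', 'x']]
```

The first result contains the empty set, confirming that INT*SINE is a generator; the second collapses to the single full input set, i.e. a pure recogniser, exactly as the worked examples conclude.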
To see that these meet our intuition concerning the relational operations, we require some notion concerning the evaluation mechanism for relational expressions. The evaluation would be done ‘tuple at a time’, and given the values of some of the selectors, the mechanism would attempt to find values for the remaining selectors by requesting a complete tuple from one of the operand sub-expressions and then use the extra information so gained to fill in the remaining values from the other operand, ensuring that the constraints required by the particular operation are met before returning the completed tuple to the invoking procedure.
For intersection we could use the given values on either operand with equal effectiveness, and the values gained from one operand can be input into the other to obtain the complete tuple.
Ex 5.1: we have relation INT with selector x and effectiveness {\emptyset, \{ x \}}, and relation SINE as in Ex. 4.6. Then to evaluate INT*SINE (which has selectors x and sx) using no inputs, we must look to either INT or SINE for a complete tuple. INT can provide it, and this value x can then be supplied to SINE to obtain the sx value. The effectiveness of INT*SINE from the formula above is {\emptyset , \{ x \}, \{sx\}, \{x, sx\}} , which tells us we have a generator, and agrees with the description of how we would actually evaluate the expression.
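The tuple-at-a-time evaluation of Ex. 5.1 can be sketched as a lazy pipeline. Assumptions of mine, not the paper's: INT enumerates the non-negative integers, and SINE is the ordinary sine function applied to the angle in radians.

```python
# Sketch of tuple-at-a-time evaluation of INT*SINE with no inputs:
# draw a complete tuple from the generator operand, then use its value
# to fill in the remaining selector from the other operand.
import itertools
import math

def int_relation():
    for x in itertools.count():              # generator: E contains {}
        yield {"x": x}

def eval_int_sine():
    for t in int_relation():                 # complete tuple from INT
        yield {**t, "sx": math.sin(t["x"])}  # SINE supplies sx given x

first_three = list(itertools.islice(eval_int_sine(), 3))
print([t["x"] for t in first_three])  # [0, 1, 2]
```

Each completed tuple is returned to the invoking procedure before the next is requested, so no intermediate relation is ever materialised.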
For union we are not able to carry over values gained from one operand to help us evaluate the other operand, and any input values must be used independently on both operands.
Ex. 5.2: suppose that in addition to the SINE relation already introduced, we have a relation COSINE with selectors x and cx and effectiveness {{x }, {x, cx }}. Then SINE+COSINE with selectors x, sx, and cx, contains all tuples <x, sx, cx> where either sx=sin(x) or cx=cos(x). To use this relation, supplying a value for x is not enough. Given the value of x we can find one tuple from the relation by applying SINE and COSINE separately, but we cannot find all the tuples with this particular x. We only require that one of the two relations is satisfied, so for example applying SINE to x would fill in the sx value, to which we would then need to add all possible values for cx. We could do this last by using a representation of the underlying set; this possibility will be investigated in the next section; here we assume that the underlying set cannot be used, and so we are unable to realise the relation SINE+COSINE given only an x value.
However, if we supply a pair of values, say x and sx, then we may be able to fill in everything—if it happens that sx is not equal to sin(x), then applying COSINE to x will complete the tuple. But if sx=sin(x), we are back in the earlier situation, and cannot realise the relation.
The only circumstance under which we can effectively use the relation SINE+COSINE is if we are given all three values, and the relation is used as a pure recogniser. From the formulae, we find that the effectiveness of SINE+COSINE is \{ \{x, sx, cx \} \}, which tells us that we only have a recogniser.
There are examples where union does provide more than a recogniser, such as the combination of SINE and COSINE where the selector cx of COSINE has been renamed sx. Then SINE+COSINE has selectors x and sx and has effectiveness \{\{x\}, \{x, sx\}\}.
When generalised union degenerates to a normal union, \( S_1 \Delta S_2 \) becomes the empty set. Looking at the definition of the effectiveness of union, we see that P always contains \( S_1 \Delta S_2 \), and the effectiveness becomes greater. The end of the last example was a case in point.
Difference, project, and generalised project can all involve existential quantification. The evaluation mechanism for this would involve supplying the values for the unquantified selectors, and then applying the relation over which quantification is made to test for the existence of a member.
Ex. 5.3: to illustrate how quantification would be handled, let us take a very simple example. Let us find the set of numbers sx for which there is an angle x such that sin(x)=sx. We will use the relation SINE from examples 4.6 and 4.8. The set that we want is obtained by projection, and is \( \text{SINE} \mid \{sx\} \), with selector sx.
Suppose that SINE has effectiveness \{\{sx\}, \{sx, x\}\}. If we are given a value of sx, we can apply SINE to this to find a value of x, if any. If we find one or more values of x, then we know that one exists and the value of sx is in the set; and if no values of x are found, then sx is not in the set. Clearly then SINE \mid \{sx\} is effective as a recogniser, and we are using an effectiveness
\[ E(SINE \mid \{sx\}) = \{\{sx\}\}. \]
However, suppose that the effectiveness of SINE is \{\{x\}, \{x, sx\}\}. We are helpless unless a value of x can be obtained from somewhere. It cannot be input, and the only way to obtain a value would be to use a representation of the set underlying the selector x. Possibilities in this direction are taken up in the next section. Without using the underlying set, the representation is ineffective, and thus
\[ E(SINE \mid \{sx\}) = \{\}. \]
If SINE was stored explicitly as a table, then it would have effectiveness \{\phi, \{x\}, \{sx\}, \{x, sx\}\} , in which case no input would be necessary, and we could directly generate the set required. The effectiveness would be
\[ E(SINE \mid \{sx\}) = \{\phi, \{sx\}\}. \]
In all cases these formulae are obtained directly from the formulae given above.
As explained above, our interest in the effectiveness of relational expressions centres on whether or not they represent relations which are generators. The test for this is whether E(expression) contains the empty set. The process which computes the effectiveness of an expression should also record the way this was obtained, and thus record the particular evaluation process whereby the relation can be generated.
Relations as mappings from one of their components to all the components, when the relations are stored explicitly as sets of tuples, can be viewed as ‘access paths’ in the sense of Earley [7]. Note, however, that here we have been considerably more general in that all the data need not be stored explicitly. With access paths in complex expressions one finds that many combinations of paths will yield the same result. A small extension of our approach in computing the effectiveness of an expression will develop particular choices of access paths, and given that different access paths have different speeds, a direct application of dynamic programming principles will select the best combination of access paths. A problem similar to this has been tackled by Stocker and Dearnley [13], but without the theoretical foundations. Note that such selections of best access paths can be very tricky, as has been pointed out by Hall [8] in the context of optimization.
Two results that can be obtained with our theory are interesting. These are stated below, without proof.

Proposition 5.1: if a relational expression represents a generator, then at least one operand of the expression is also a generator.

Proposition 5.2: relational expressions that are equivalent within the algebra have the same effectiveness.
So far we have ignored the problem of infinite or potentially infinite relations. If we allow the generation of infinite (or indeed, very large) relations, we encounter three problems. One is the impossibility of displaying explicitly infinite sets. The others are more subtle. The result of a complete expression can be finite, even though some intermediate result is infinite; if we were to generate fully the intermediate relation, we would never terminate. In using a generator as a recogniser, we must be able to terminate in the case that the element is not in the set.
The termination problem needs solving. The tuple at a time evaluation mechanism gives a better solution than the complete evaluation of intermediate results, but this is not enough. Consider, for example, the intersection of the countably infinite relation INT with the relation SMALL of selector x, which is a pure recogniser for numbers less than 100. INT*SMALL is then finite, containing all the integers less than 100, but evaluation of INT*SMALL would have to continue generating integers using INT ad infinitum just in case SMALL accepts another one. We must exploit further information in order to be able to terminate the generation of INT. The solution that we offer is to postulate orderings, either partial or total, on all underlying sets.

However these orderings must be reasonable, in the sense that they will be useful in the solution to our problem. Not all orderings will be acceptable. For example, for strings the usual lexicographic ordering would be unreasonable, while the interpretation of strings as integers base r, where r is the number of symbols, is reasonable. These orderings then induce partial orderings on all relations. Now to control the termination of the evaluation of relational expressions, we propose to use supremum-infimum pairs as bounds for relations. We can then test an expression to ascertain the bounds for all the intermediate results, given the bounds for the operands. The method is obvious. If all generations proceed monotonically with respect to their respective orderings, then the bounds can be used to enforce termination upon potentially infinite relations.
The conversion of a generator for an infinite set into a terminating recogniser is a classical problem, and is solved by the method above, using orderings of the sets to detect termination in the case of non-membership.
6. Using the Underlying Sets
In the discussion above we have assumed that the objects came from a single
underlying set U. In discussing the effectiveness of the representation of relations, we
have not assumed anything concerning the representations of the underlying sets. However, it is well known from recursive set theory that if the underlying set is represented by a generator, then any recogniser of a set can be converted to a generator by generating successively potential members of the set and then testing for membership before releasing the element. This is equally true for relations, but in order to discuss this adequately we will suppose that we have a collection of underlying sets
\[ \Omega = \{ U_i : i \text{ in some index set} \} \]
The single set of objects U treated so far is simply the union of these underlying sets U_i. We now associate a single underlying set with each selector, with a mapping
\[ d: \Sigma \rightarrow \Omega \]
For each selector we will talk of the effectiveness of the representations of the associated underlying set, writing E(d(s)) for selector s. Now we note that representations of some underlying sets are necessarily only recognisers (for example, the real numbers and all other uncountable infinite sets), while other underlying sets may be represented by either recognisers or generators. We express the effectiveness of a representation for the underlying sets in exactly the same way as before, writing E(d(s)) either as \{s\} or \{\phi, \{ s\}\} as appropriate. We will not consider the case where the underlying set is not effectively represented, with E(d(s)) = {}.
Cartesian products of the underlying sets would still be written as U(S), but now these mean something much smaller, since each selector is associated with a subset of the universe of objects, U. This in turn affects the definitions of the relational operations, definitions 3.1 to 3.5.
The representations of the U(S) are obtained from the representations of the individual sets. Again the effectiveness of these representations would be expressed as a set of input sets, and thus the underlying sets can now appear in relational expressions. To convert all representations as near to generators as possible, we then simply combine relations with the cartesian product underlying them, using generalised intersection.
Ex. 6.1: we extend Ex. 4.6 to exploit the representation of the set underlying the selector x if this is possible. We have
\[ \text{SINE}' = \text{SINE} * U(\{x, sx\}) \]
If the set underlying x is the real numbers, then we necessarily have
\[ E_1(d(x)) = \{\{x\}\}, \]
a pure recogniser for the reals. Substituting this in the formula for the effectiveness of the representation of SINE’, we have
\[ E_1(\text{SINE}') = \{\{x\}, \{x, sx\}\}, \]
which is the effectiveness we had in Ex. 4.6. However, if the set underlying selector x is the rationals, represented by a generator, we find:
\[ E_2(d(x)) = \{\phi, \{x\}\}, \]
\[ E_2(\text{SINE}') = \{\phi, \{x\}, \{sx\}, \{x, sx\}\}. \]
The representation of the relation SINE’ has become a pure generator as a result of exploiting the representation of the underlying set.
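Ex. 6.1 can be sketched in Python. The enumeration of the rationals below is a crude illustration of my own, standing in for whatever generator represents d(x); it is not the paper's construction.

```python
# Sketch of Ex. 6.1: intersecting SINE with a generator for the set
# underlying x turns the representation into a pure generator of <x, sx>.
import itertools
import math
from fractions import Fraction

def rationals():
    """A simple (hypothetical) generator for non-negative rationals."""
    for n in itertools.count(1):
        for p in range(n + 1):
            yield Fraction(p, n)

def sine_prime():
    """SINE' = SINE * U({x, sx}): exploit d(x) to generate complete tuples."""
    for x in rationals():
        yield {"x": x, "sx": math.sin(x)}

first = next(sine_prime())
print(first["x"], first["sx"])  # 0 0.0
```

Had d(x) been the reals, only a recogniser would be available for it, and SINE' would be no more effective than SINE itself, matching \( E_1 \) above.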
Underlying sets can also be exploited within relational operations. This is important within generalised union, because, as we saw, if underlying sets are not exploited, then in general only a recogniser results. The general strategy is to combine the underlying sets into the expressions using generalised intersection. Let us look at the Ex. 5.2 again.
Ex. 6.2. Using the relations SINE and COSINE, as in Ex. 5.2, we now exploit the underlying sets. Given a value for x, we can now apply SINE to this x to obtain the sx values, and then use the representation of d(cx) to generate values for the remaining component, and we can apply COSINE to x and then fill in the third component using d(sx). The relational expression that we are in effect exploiting is
(U(cx)*SINE)+(U(sx)*COSINE)
which has effectiveness \( \{\{x\}, \{x, sx\}, \{x, cx\}, \{x, sx, cx\}\} \).
The fullest effectiveness that we can achieve is given by
U(x)*U(sx)*U(cx)*(SINE+COSINE)
which is a generator. This method for generating the tuples of the relation generates the tuples of the underlying cartesian product, and then uses the SINE+COSINE as a recogniser to select only those tuples that we want.
Note that distributing the U(si)'s inside the union gives us an effectiveness superficially like that we would have obtained had we exploited the underlying sets in the representations of the operands, but we do obtain extra power from using the underlying sets at the union. Clearly an important facet of the use of underlying sets is deciding just where to use their representations most effectively, without overdoing it and ending up with trivial and inefficient evaluation strategies.
Similar use of the underlying sets is possible for the other operations, and could be very useful in handling quantifications.
7. Conclusions and Open Problems
The theory that we have presented here forms the basis for the abstract syntax and semantics for a relational data base system. The extension of the language to include the usual arithmetic operations and comparisons within the concrete syntax are now trivial. For example, the relation SMALL of numbers less than 100 could be directly written as (x<100), while the relation which takes two values x and y to produce a third, z, which is their sum, could be written in the usual assignment notation, (z:=x+y). The effectiveness of these could be deduced directly from the surface syntax, and we might have, for example, the effectiveness of the adding relation as \{\{x, y\}, \{x, y, z\}\}. In this last, if we permitted the usual algebraic manipulations, a stronger effectiveness could be deduced.
Unfortunately the text doesn’t clearly spell out that x<100 might be given as the condition in the “selection” operator that became : in ISBL (mistyped as ; in PRTV), SELECT in BS12, and WHERE in Tutorial D. Nor does it go so far as to suggest that an assignment such as z:=x+y might be given in connection with what in 1975 would have been a new relational operator—the # that was added to ISBL in March, 1976. It became CALCULATE in BS12 and EXTEND in Tutorial D, and is generally referred to as extension now. It is interesting to note that even those few early textbooks that give any information on ISBL fail to mention its operator named # (references [17] and [18], for example).
Consider a Tutorial D expression of the general form
EXTEND r : { y := f(x) }
and let s be the result of its evaluation. If we remove the colon and braces from the second operand, we have the predicate expression y = f(x), which
could be taken in this context as denoting the relation that represents the extension of that predicate. In that case, the expression is equivalent to \( r \) \texttt{JOIN} \( (y = f(x)) \). If we further take \( x \) as denoting an \( n \)-tuple \((n \geq 0)\), we can note that EXTEND requires the heading of \( r \) to be a superset of that of \( x \). The implied join operation is normally expected to be “lossless” (i.e., the cardinalities of \( r \) and \( s \) are equal), but in ISBL that requirement was dropped, such that where \( f(x) \) is undefined (as in division by zero, for example) for some tuple of \( r \), no corresponding tuple appears in \( s \).
Note that as well as being convenient, EXTEND automatically limits the user to write only what HHT calls effective expressions—ones that are guaranteed to yield finite relations even when an operand might be infinite. Although it also requires \( f(x) \) to be a function, that particular limitation has been noted as theoretically unnecessary. It could perhaps also allow the use of non-functional operators such as “square root of”, where a domain element can map to \( n \) range elements (where \( n \) is finite).
When \( f(x) \) itself is a predicate expression it can be used as a selection condition (in Tutorial D, \( r \) \texttt{WHERE} \( f(x) \)) and again we can regard it as denoting the relation representing the extension of that predicate, to be joined with \( r \). Again, the heading of \( r \) has to be a superset of that of \( x \), so again we can say that WHERE limits the user to write only effective expressions.
In recognition of the fact that restriction and extension can both be regarded as special cases of join, ISBL eventually allowed “filters” to be included in invocations of \( \# \), its extension operator. Thus, an expression of the form “\( r \# y := f(x), y < 7, z = g(y) \)” was equivalent to “\( r \# y := f(x) : y < 7 \# z = g(y) \)”, where \( : \) is restriction. And as already noted, even \( y := f(x) \) could act as a filter in the case where \( f(x) \) is not defined for every tuple of \( r \).
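The not-necessarily-lossless behaviour of ISBL's extension can be sketched in Python. This is my rendering of the semantics described above, not ISBL syntax; the sample relation and the division-by-zero case are invented.

```python
# Sketch of ISBL-style extension (#): tuples for which f is undefined
# simply produce no corresponding tuple in the result.

def extend(r, name, f):
    """EXTEND r : { name := f(t) }, dropping tuples where f is undefined."""
    out = []
    for t in r:
        try:
            out.append({**t, name: f(t)})
        except ZeroDivisionError:        # f undefined for this tuple
            pass
    return out

r = [{"x": 1, "d": 2}, {"x": 1, "d": 0}]
extended = extend(r, "y", lambda t: t["x"] / t["d"])
print(extended)  # only the tuple with d != 0 survives
```

Because the new attribute is computed only from values already present in each tuple, this operator can yield only effective expressions, as HD's note observes.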
We are, however, left with some problems. The generalised union suggests the use of semi-procedural representations, placing a ‘*’ in the positions where arbitrary elements of the underlying set would be placed. This then gives a finite representation of infinite sets. The symbol ‘*’ would play a ‘matches anything’ role in the manner that is traditional in computing practice, but here we appear to have a firm theoretical underpinning. This has yet to be explored.
The evaluation mechanism has been presented informally. It needs a precise formulation, for example by using the techniques of Hitchcock [9]. This would enable both the derivation of effectiveness from more primitive notions and the application of termination theorems such as those of Hitchcock and Park [10].
The characterisation of effectiveness is not the most powerful that we could use, and we could imagine partial representations which given a set of input values could compute some of, but not all of, the remaining components. We are developing a formalism to cope with this, basing it on the propositional calculus.
Details of the algorithms for obtaining the most effective representation for an expression have not been filled in. An associated problem concerned with the estimation of the cardinality of relations is still open.
8. Acknowledgements.
The train of investigations which led to this paper began with a Technical Note by Stephen Todd, concerning how to implement first order functions on relations within the current experimental relational data base system at the IBM (UK) Scientific Centre, Peterlee, England. The current system is based upon the traditional relational algebra, and it soon became clear that a complete rethink of this approach was necessary. During the course of the investigations we had numerous discussions with colleagues at the Scientific Centre.
In particular, we acknowledge ideas concerning role names and how to model these which came from Ken Hanford (selectors in the spirit of ULD III [11]) and Peter Quarendon (free variables and relations as predicates). We also thank John Owlett for discussions which helped clarify our thoughts.
9. References
The following are referenced in Hugh Darwen’s annotations:
14. The Third Manifesto and much related literature is available at http://www.thethirdmanifesto.com.
15. Hugh Darwen’s presentation slides and accompanying notes are available at https://www.northumbria.ac.uk/sd/academic/ee/work/research/computerscience/computational_intelligence/third_manifesto/.
Code Size and Accuracy-Aware Synthesis of Fixed-Point Programs for Matrix Multiplication
Matthieu Martel, Mohamed Amine Najahi, Guillaume Revy
HAL Id: lirmm-00860383
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00860383
Submitted on 30 Sep 2015
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Code Size and Accuracy-Aware Synthesis of Fixed-Point Programs for Matrix Multiplication
Matthieu Martel\textsuperscript{1,2,3}, Amine Najahi\textsuperscript{1,2,3} and Guillaume Revy\textsuperscript{1,2,3}
\textsuperscript{1}Univ. Perpignan Via Domitia, DALL, F-66860, Perpignan, France
\textsuperscript{2}Univ. Montpellier II, LIRMM, UMR 5506, F-34095, Montpellier, France
\textsuperscript{3}CNRS, LIRMM, UMR 5506, F-34095, Montpellier, France
\{matthieu.martel, amine.najahi, guillaume.revy\}@univ-perp.fr
Keywords: Automated code synthesis, Matrix multiplication, Fixed-point arithmetic, Certified numerical accuracy.
Abstract: In digital signal processing, many primitives boil down to a matrix multiplication. In order to enable savings in time, energy consumption, and on-chip area, these primitives are often implemented in fixed-point arithmetic. Various conflicting goals undermine the process of writing fixed-point codes, such as numerical accuracy, run-time latency, and size of the codes. In this article, we introduce a new methodology to automate the synthesis of small and accurate codes for matrix multiplication in fixed-point arithmetic. Our approach relies on a heuristic to merge matrix rows or columns in order to reduce the synthesized code size, while guaranteeing a target accuracy. We suggest a merging strategy based on finding closest pairs of vectors, which makes it possible to address in a few seconds problems such as the synthesis of small and accurate codes for matrix multiplications of size 64 and more. Finally, we illustrate its efficiency on a set of benchmarks, and we show that it allows us to reduce the synthesized code size by more than 50\% while maintaining good numerical properties.
1 INTRODUCTION
Embedded systems are usually dedicated to one or few tasks that are often highly demanding on computational resources. Examples of computational applications widely deployed on these targets include discrete transforms (DCT, FFT, and digital filters) as well as image processing. Since floating-point implementations are costly in hardware resources, embedded system programmers prefer fixed-point arithmetic (Yates, 2009). Indeed, the latter requires no specific hardware, relies on integer arithmetic only, and is highly efficient in terms of execution speed and energy consumption. Besides, in such systems, the accuracy is often critical, and there is a real need for the design of numerically certified basic blocks.
However, there are currently two hurdles to the widespread use of fixed-point arithmetic. First, fixed-point programming is a tedious process that requires a high degree of expertise, since the programmer is in charge of such arithmetical details as alignments and overflow prevention. Second, the low dynamic range of fixed-point numbers compared to floating-point numbers led to a persistent belief that fixed-point computations are inherently unsafe and should be confined to uncritical applications. For all of these reasons, fixed-point arithmetic has long been limited to small and simple problems. In this article, our goal is to overcome these limitations for the case of matrix multiplication, which is a widely deployed basic block in embedded applications.
For this purpose, we suggest and implement a novel methodology to automate the synthesis of tight and accurate codes for matrix multiplication, which lends itself naturally to matrix-vector multiplication. In fact, a classical $n \times n$ matrix multiplication requires $n^2$ dot-products. For the sake of accuracy, each dot-product may rely on a particular optimized fixed-point code, leading to large code sizes. The challenge is therefore to reduce the number of synthesized dot-products without harming the overall accuracy of matrix multiplication. Our contribution is a novel strategy to carefully select and merge close enough rows and columns of the input matrices in order to reduce the number of synthesized dot-products, while guaranteeing a certain accuracy bound. This methodology is implemented in an automated tool and makes it possible to quickly and easily treat problems previously considered intractable or unsuitable for this arithmetic. For instance, writing a code for the accurate multiplication of size-64 matrices is almost impossible to achieve by hand, since it involves various trade-offs between code size, runtime latency, and accuracy. Moreover, to increase the level of confidence in the synthesized codes, our tool uses an analytic methodology based on interval arithmetic (Moore et al., 2009) to compute strict bounds on the roundoff errors and to guarantee the absence of overflow. With each synthesized code, it provides an accuracy certificate that bounds the accuracy errors due to finite wordlength effects, and that can be checked using the formal verification tool Gappa\(^1\) (Melquiond, 2006).
Although much work exists on linear algebra routines in fixed-point arithmetic, to our knowledge, this article is the first one where certified fixed-point techniques are applied to such large problems as size-64 matrix multiplication. Indeed, (Nikolic et al., 2007) deals with the transformation from floating-point to fixed-point of matrix decomposition algorithms for DSPs and (Golub and Mitchell, 1998) with the implementation of matrix factorization algorithms for the particular C6x VLIW processor, while (Mehlhose and Schiffermüller, 2009) and (Irturk et al., 2010) discuss matrix inversion for the C64x+ DSP core and FPGAs, respectively. For matrix multiplication, (Syed M. Qasim, 2010) presents a hardware implementation of a matrix multiplier optimized for a Virtex4 FPGA, which mainly relies on a large matrix-vector block to handle large matrices. Yet another FPGA architecture is presented in (Sotiropoulos and Papafostathion, 2009), which uses parallel DSP units and multiplies sub-matrices whose size has been optimized so as to fully exploit the resources of the underlying architecture. In (Campbell and Khatri, 2006), a delay and resource efficient methodology is introduced to implement an FPGA architecture for matrix multiplication in integer/fixed-point arithmetic. These works suggest a variety of techniques to determine and optimize fixed-point wordlengths. However, the roundoff error bounds in their methodologies are computed \textit{a posteriori} by simulating the fixed-point design and comparing its output to that of a floating-point design. Also, when determining fixed-point formats of intermediate variables, most of these works use a technique introduced by Sung et al. (Kim et al., 1998). This technique consists in using floating-point simulation to estimate the range of intermediate computations and convert the program to fixed-point arithmetic. In (Nikolic et al., 2007), the Sung technique is used to derive a number of linear algebra routines.
This simulation-based technique has two drawbacks: 1. Its duration grows exponentially with the number of input variables, which makes it impractical for large problems. 2. It provides no strict guarantee that intermediate computations will not overflow. On the other hand, the authors of (Lee et al., 2006) use a more rigorous approach to generate fixed-point codes for various problems. But we are not aware of any automated implementation, and their examples do not go beyond the size-8 DCT.
This article is organized as follows: After some background on fixed-point arithmetic in Section 2, Section 3 discusses two straightforward approaches for synthesizing matrix multiplication programs. Then our new strategy to synthesize codes that satisfy certain tradeoff goals is presented in Section 4. Finally Section 5 is dedicated to some experimental benchmarks, before a conclusion in Section 6.
2 BACKGROUND ON FIXED-POINT ARITHMETIC
This section presents our fixed-point arithmetic model, including an error model, and it discusses the numerical issues arising from the computation of dot-products in this arithmetic.
2.1 Fixed-Point Arithmetic Model
Fixed-point number. Fixed-point arithmetic allows one to represent a real value by means of an integer associated to an \textit{implicit} scaling factor. Let \(X\) be a \(k\)-bit signed integer in radix 2, encoded using two's complement notation. Combined with a factor \(f \in \mathbb{Z}\), it represents the real value \(x\) defined as follows:
\[ x = X \cdot 2^{-f}. \]
In the sequel of this article, \(Q_{i,f}\) denotes the format of a given fixed-point value represented using a \(k\)-bit integer associated with a scaling factor \(f\), with \(k = i + f\), as shown in Figure 1.
Hence a fixed-point variable \(v\) in \(Q_{i,f}\) is such that:
\[ v = V \cdot 2^{-f} \quad \text{with} \quad V \in \mathbb{Z} \cap [-2^{k-1}, 2^{k-1} - 1], \quad (1) \]
that is, \(v\) ranges over \([-2^{i-1}, 2^{i-1} - 2^{-f}]\) by steps of \(2^{-f}\).
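For illustration, the quantization implied by this representation can be sketched in a few lines of Python (the helper names `to_fixed` and `to_real` are ours, not part of any tool described in this article):

```python
def to_fixed(x, i, f):
    """Quantize a real value x to the Q_{i,f} format (k = i + f bits,
    two's complement): the stored integer is trunc(x * 2^f), clamped
    to [-2^(k-1), 2^(k-1) - 1]."""
    k = i + f
    X = int(x * (1 << f))  # truncation toward zero
    lo, hi = -(1 << (k - 1)), (1 << (k - 1)) - 1
    return max(lo, min(hi, X))

def to_real(X, f):
    """Recover the real value represented by integer X with scaling 2^-f."""
    return X * 2.0 ** (-f)

# A value in Q_{3,5} on k = 8 bits: representable range is
# [-4, 4 - 2^-5] by steps of 2^-5 = 0.03125.
X = to_fixed(1.7, 3, 5)
print(X, to_real(X, 5))  # 54 represents 1.6875
```

Note that out-of-range values are clamped here for simplicity; the methodology in this article instead chooses formats so that overflow is provably impossible.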
\(1\)See http://gappa.gforge.inria.fr.
Figure 1: Fixed-point number in \(Q_{3,5}\) on \(k = 8\) bits (integer bits \(X_7 X_6 X_5\), fraction bits \(X_4 \ldots X_0\)).
Set of fixed-point intervals. In practice, a fixed-point variable $v$ may lie in a smaller range than the one defined in Equation (1). For instance, if $V \in \mathbb{Z} \cap [-2^{k-1} + 2^{k-2}, 2^{k-1} - 2^{k-2}]$ in Equation (1), then $v$ is still in the $Q_{i,f}$ format but with additional constraints on the runtime values it can take. For this reason, in this article, we denote by Fix the set of fixed-point intervals, where each element has a fixed format and an interval that narrows its runtime values.
Notice that unlike the exponent of floating-point variables, the scaling factor of fixed-point variables is fixed and is not encoded into the program. It is known only by the programmer, who is in charge of all the arithmetical details. For example, when adding two fixed-point values, both operand points have first to be aligned, that is, operands have to be set in the same fixed-point format. This alignment may lead to a potential roundoff error in finite precision.
2.2 Error Model in Fixed-Point Arithmetic
Let $v$, $v_x$, and $v_r$ be three fixed-point variables in the formats $Q_{i,f}$, $Q_{i_x,f_x}$, and $Q_{i_r,f_r}$, respectively, and $\diamond$ be an operation in $\{+, -, \times\}$ such that:
$$v = v_x \diamond v_r.$$
For the sake of conciseness, we do not deal here with the determination of fixed-point formats, but it can be found in (Yates, 2009) or (Mouilleron et al., 2013). Let us rather detail how we compute an interval $\text{error}(v)$ enclosing the error entailed by the evaluation of $v$, where $\text{value}(v)$ is an interval enclosing the value of $v$.
- In absence of overflow, addition and subtraction are error-free. Hence, for $\diamond \in \{+, -\}$ we have:
$$\text{error}(v) = \text{error}(v_x) \diamond \text{error}(v_r).$$
- If the operation is a multiplication, we have:
$$\text{error}(v) = \text{error}_\times + \text{error}(v_x) \cdot \text{error}(v_r) + \text{error}(v_x) \cdot \text{value}(v_r) + \text{value}(v_x) \cdot \text{error}(v_r),$$
where $\text{error}_\times$ is the error entailed by the multiplication itself. Usually in fixed-point arithmetic this error is due to the truncation of the exact result of the multiplication to fit in a smaller format with $f$ fraction bits. Hence we have:
$$\text{error}_\times = [2^{-(f_x + f_r)} - 2^{-f}, 0].$$
Most 32-bit DSP processors provide a $32 \times 32$ multiplier that returns the 32 most significant bits of the exact result; this is the multiplier considered in this work.
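The error propagation for multiplication can be exercised with a small interval-arithmetic sketch in Python (helper names are ours; intervals are `(lo, hi)` tuples):

```python
def iadd(a, b):
    """Interval addition."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Interval multiplication: hull of all endpoint products."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def mul_error(val_x, err_x, val_r, err_r, f_x, f_r, f):
    """Error interval for v = v_x * v_r truncated to f fraction bits,
    following the model: error(v) = error_x + err_x*err_r
    + err_x*value_r + value_x*err_r."""
    trunc = (2.0 ** -(f_x + f_r) - 2.0 ** -f, 0.0)  # truncation error
    e = iadd(trunc, imul(err_x, err_r))
    e = iadd(e, imul(err_x, val_r))
    e = iadd(e, imul(val_x, err_r))
    return e

# With exact operands (no incoming error), only the truncation error remains:
print(mul_error((0.0, 1.0), (0.0, 0.0), (0.0, 1.0), (0.0, 0.0), 16, 16, 16))
```

This is a sketch of the propagation rule only; the actual tool also tracks value intervals and formats to rule out overflow.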
Left shift entails no error but only a possible overflow. However right shift may also be followed by the truncation of the exact result to fit in a smaller format with $f$ fraction bits. Hence the evaluation of $v = v_r >> r$ entails an error defined as follows:
$$\text{error}(v) = \text{error}(v_r) + \text{error}_{>>}$$
with in practice:
$$\text{error}_{>>} = [2^{-f_r} - 2^{-f}, 0] \quad \text{and} \quad f = f_r - r.$$
2.3 Synthesis of Dot-Product Code in Fixed-Point Arithmetic
The code synthesized by our tool for matrix multiplication relies on dot-product evaluations as basic blocks. Besides the arithmetic model, the computation order of summations when evaluating dot-products has a great impact on the accuracy of the resulting codes. This is a well-known issue in floating-point arithmetic (Ogita et al., 2005), (Rump, 2009), but the same holds for fixed-point arithmetic. Precisely, summing $n$ variables can be done in $\prod_{i=1}^{n-1} (2i - 1)$ different ways. For example, there exist 945 different schemes to implement a size-6 dot-product. Due to this huge combinatorics, there is a need for heuristics to address the issue of synthesizing accurate dot-product codes.
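The count of summation schemes is the double factorial of odd numbers (OEIS A001147) and is easy to check (a sketch; `num_schemes` is our name):

```python
from math import prod

def num_schemes(n):
    """Number of evaluation schemes for summing n variables:
    the product of the odd numbers 1, 3, ..., 2n-3 (OEIS A001147)."""
    return prod(2 * i - 1 for i in range(1, n))

print([num_schemes(n) for n in range(2, 7)])  # [1, 3, 15, 105, 945]
```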
Our approach relies on the use of the CGPE library (Mouilleron and Revy, 2011). Introduced by Revy et al., it was initially designed to synthesize codes for polynomial evaluation, and it has been so far extended to summation and dot-product. CGPE implements the previous arithmetic model and embraces some heuristics to produce fast and numerically certified codes, relying on $k$-bit integer arithmetic only. Typically $k \in \{16, 32, 64\}$ and, for the rest of the article, we consider $k = 32$. CGPE takes as input an interval of fixed-point values for each coefficient and variable, a maximum error bound allowed for the code evaluation, and some architectural constraints. Then it outputs codes exposing a high degree of instruction-level parallelism and for which we are able to bound the evaluation error.
For dot-products, CGPE takes two size-$n$ vectors $V_1$ and $V_2$ whose elements are fixed-point intervals, that is,
$$V_1, V_2 \in \text{Fix}^n,$$
with $\text{Fix}$ the set of fixed-point intervals defined in Section 2.1. It computes fast and numerically certified codes to implement $V_1 \cdot V_2$. In the sequel of the article, we will refer to the routine $\text{DPSynthesis}(V_1, V_2)$ to compute these codes automatically.
---
2See http://oeis.org/A001147.
3See http://cgpe.gforge.inria.fr.
3 STRAIGHTFORWARD APPROACHES FOR THE SYNTHESIS OF MATRIX MULTIPLICATION PROGRAMS
Let $A$ and $B$ be two fixed-point interval matrices of size $m \times n$ and $n \times p$, respectively:
$$A \in \mathbb{Fix}^{m \times n} \text{ and } B \in \mathbb{Fix}^{n \times p}.$$
In the article, we denote by $A_{i,:}$ and $A_{:,j}$ the $i^{th}$ row and $j^{th}$ column of $A$, respectively, and $A_{i,j}$ the element of the $i^{th}$ row and $j^{th}$ column of $A$.
We now discuss two straightforward approaches for the synthesis of a fixed-point program to multiply $A'$ and $B'$, where $A'$ and $B'$ are two matrices that belong to $A$ and $B$, that is, where each element $A'_{i,k}$ and $B'_{k,j}$ belongs to the intervals $A_{i,k}$ and $B_{k,j}$, respectively. This consists in writing a program for computing $C = A \cdot B$, where $C \in \mathbb{Fix}^{m \times p}$. Therefore, $\forall (i, j) \in \{1, \ldots, m\} \times \{1, \ldots, p\}$, we have:
$$C_{i,j} = A_{i,:} \cdot B_{:,j} = \sum_{k=1}^{n} A_{i,k} \cdot B_{k,j}, \quad (2)$$
At the end of the section, we give some code size and accuracy estimates for each approach.
3.1 Accurate and Compact Approaches
Following Equation (2), a first straightforward approach to write code for matrix multiplication consists in synthesizing a program for each dot-product $A_{i,:} \cdot B_{:,j}$. Algorithm 1 below implements this approach, where $\text{DPSynthesis}(A_{i,:}, B_{:,j})$ produces a fast and numerically certified code for the computation of $A_{i,:} \cdot B_{:,j}$. Remark that Algorithm 1 issues $m \times p$ requests to the $\text{DPSynthesis}$ routine. At runtime, only one call to each generated code will be issued, for a total of $m \times p$ calls.
To significantly reduce the size of the whole program, the computed codes could be refactored to evaluate more than one dot-product. Algorithm 2 below, whose input and output are the same as Algorithm 1, pushes this idea to the limits by merging element by element the matrices $A$ and $B$ into a unique row $\mathcal{U}$ and column $\mathcal{V}$, respectively. Merging two fixed-point intervals means computing their union. Using this approach, it issues a unique call to $\text{DPSynthesis}$ at synthesis. But, at runtime, $m \times p$ calls to the synthesized code are still needed to evaluate the matrix product.
Let us now illustrate the differences between these two algorithms by considering the problem of generating code for the product of the following two fixed-point interval matrices:
$$A = \begin{bmatrix} [-1000, 1000] & [-3000, 3000] \\ [-1, 1] & [-1, 1] \end{bmatrix}$$
and
$$B = \begin{bmatrix} [-2000, 2000] & [-2, 2] \\ [-4000, 4000] & [-10, 10] \end{bmatrix},$$
where $A_{1,1}$ and $B_{1,1}$ are in the format $Q_{0,23}$, $A_{1,2}$ in $Q_{2,20}$, $A_{2,1}$, $A_{2,2}$, $B_{2,1}$ in $Q_{3,20}$, $B_{1,2}$ in $Q_{2,29}$, and $B_{2,2}$ in $Q_{27}$. Algorithm 1 produces 4 distinct codes, denoted by $\text{DPCode}_{1,1}$, $\text{DPCode}_{1,2}$, $\text{DPCode}_{2,1}$, and $\text{DPCode}_{2,2}$. On the other hand, Algorithm 2 first computes $\mathcal{U}$ and $\mathcal{V}$ as follows:
$$\mathcal{U} = A_{1,:} \cup A_{2,:} = ([-1000, 1000] \cup [-1, 1], \; [-3000, 3000] \cup [-1, 1]) = ([-1000, 1000], \; [-3000, 3000])$$
and
$$\mathcal{V} = B_{:,1} \cup B_{:,2} = ([-2000, 2000] \cup [-2, 2], \; [-4000, 4000] \cup [-10, 10]) = ([-2000, 2000], \; [-4000, 4000]).$$
Then, $\text{DPCode}_{\mathcal{U}, \mathcal{V}}$ is generated that evaluates the dot-product of $\mathcal{U}$ and $\mathcal{V}$. Table 1 summarizes the properties of the codes produced by both algorithms on the example.
Algorithm 1 Accurate algorithm.
**Input:**
Two matrices $A \in \mathbb{Fix}^{m \times n}$ and $B \in \mathbb{Fix}^{n \times p}$
**Output:**
Code to compute the product $A \cdot B$
**Algorithm:**
1: for $1 \leq i \leq m$
2: \hspace{1em} for $1 \leq j \leq p$
3: \hspace{2em} $\text{DPSynthesis}(A_{i,:}, B_{:,j})$
4: end for
5: end for
Algorithm 2 Compact algorithm.
**Algorithm:**
1: $\mathcal{U} \leftarrow A_{1,:} \cup A_{2,:} \cup \cdots \cup A_{m,:}$, with $\mathcal{U} \in \mathbb{Fix}^{1 \times n}$
2: $\mathcal{V} \leftarrow B_{:,1} \cup B_{:,2} \cup \cdots \cup B_{:,p}$, with $\mathcal{V} \in \mathbb{Fix}^{n \times 1}$
3: $\text{DPSynthesis}(\mathcal{U}, \mathcal{V})$
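The row/column merging at the heart of the compact algorithm can be sketched in Python on the $2 \times 2$ example above (interval vectors as lists of `(lo, hi)` tuples; helper names are ours):

```python
def iunion(a, b):
    """Union (hull) of two intervals."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def merge_vectors(vs):
    """Component-wise union of a list of equal-length interval vectors."""
    out = vs[0]
    for v in vs[1:]:
        out = [iunion(x, y) for x, y in zip(out, v)]
    return out

A = [[(-1000, 1000), (-3000, 3000)],
     [(-1, 1), (-1, 1)]]
B = [[(-2000, 2000), (-2, 2)],
     [(-4000, 4000), (-10, 10)]]

# Compact algorithm: merge all rows of A and all columns of B, then a
# single synthesized dot-product code covers every runtime dot-product.
U = merge_vectors(A)                                      # rows of A
V = merge_vectors([[row[j] for row in B] for j in range(2)])  # columns of B
print(U)  # [(-1000, 1000), (-3000, 3000)]
print(V)  # [(-2000, 2000), (-4000, 4000)]
```

The widened intervals of `U` and `V` show why merging trades accuracy for code size: the single generated code must be correct over the hull of all merged rows and columns.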
The ability to produce codes with guaranteed error bounds would be useless if the generated code size were excessively large. In this article, we go further than Algorithms 1 and 2, and explore possible means to achieve tradeoffs between the two conflicting goals.
3.2 Code Size and Accuracy Estimates
The dot-product is the basic block of classical matrix multiplication. In a size-$n$ dot-product, regardless of the evaluation scheme used, $n$ multiplications and $n-1$ additions are performed, since it can be reduced to a size-$n$ summation. Also, depending on the formats of their operands, additions frequently require alignment shifts. Thus the number of shifts is bounded by $2n$ and, in absence of overflow, it falls to $n$. Hence $4n-1$ is a worst case bound on the number of elementary operations (additions, multiplications, and shifts) needed to evaluate a size-$n$ dot-product. Globally, $(4n-1)\cdot t$ is a bound on the total size of a matrix multiplication code, where $t \in \{1, \cdots, m \times p\}$ is the number of generated dot-product codes. On the previous example, this bound evaluates to 28 for Algorithm 1 vs. 7 for Algorithm 2, since $n = 2$, and $t = 4$ and 1, respectively.
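These estimates are easy to reproduce on the running example (a sketch; `size_bound` is our name):

```python
def size_bound(n, t):
    """Worst-case operation count: at most 4n - 1 elementary operations
    (multiplications, additions, alignment shifts) per size-n dot-product,
    times t generated dot-product codes."""
    return (4 * n - 1) * t

# 2x2 example: Algorithm 1 generates m*p = 4 codes, Algorithm 2 only 1.
print(size_bound(2, 4), size_bound(2, 1))  # 28 7
```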
As for accuracy estimates, given a matrix multiplication program composed of several DPCodes, the maximum of all error bounds can be considered as a measure of the numerical quality. However, examples can be found where this measure does not reflect the numerical accuracy of all the codes. Consider for instance the example of Section 3.1, where both algorithms generate codes that have the same maximum error bound ($\approx 2^{-5}$), yet 3 of the 4 DPCodes generated by Algorithm 1 are by far more accurate than this bound. For this reason, we may also rely on the average error since we consider it as a more faithful criterion to estimate the accuracy.
4 DYNAMIC CLOSEST PAIR ALGORITHM FOR CODE SIZE VS. ACCURACY TRADEOFFS
In this section, we discuss how to achieve code size vs. accuracy tradeoffs, and the related combinatorics. We finally detail our new approach, based on row and column merging and implemented in the Dynamic Closest Pair algorithm.
4.1 How to Achieve Tradeoffs
On the one hand, when developing a numerical application for embedded systems, the amount of program memory available imposes an upper bound on the code size. On the other hand, the nature of the application and its environment help in deciding on the required degree of accuracy.
Once these parameters are set, the programmer tries Algorithms 1 and 2. Either the accurate algorithm (Algorithm 1) is not accurate enough: a solution we do not discuss here consists in adapting the fixed-point computation wordlengths to reach the required accuracy, as in (Lee and Villasenor, 2009). Or the compact algorithm (Algorithm 2) does not satisfy the code size constraint: other solutions must be considered, such as adding more hardware resources. Finally, the only remaining case is when Algorithm 1 satisfies the accuracy constraint but has a large code size, while Algorithm 2 satisfies the code size bound but is not accurate enough. This case calls for code size vs. accuracy tradeoffs.
Recall that $m \times p$ dot-product calls are required at runtime. To evaluate them using less than $m \times p$ DPCodes, it is necessary to refactor some DPCodes so that they would evaluate more than one runtime dot-product. This amounts to merging certain rows and/or columns of the input matrices together. Obviously, it is useless to go as far as compressing the left and right matrices into one row and column, respectively, since this corresponds to Algorithm 2. Our idea is illustrated by Figure 2 on a $4 \times 4$ matrix multiplication. In this example, each matrix is compressed.
Let $A$ and $B$ be two fixed-point interval matrices of size $m \times n$ and $n \times p$, respectively:
$$A \in \mathbb{Fix}^{m \times n} \quad \text{and} \quad B \in \mathbb{Fix}^{n \times p},$$
and their two associated sets of vectors
$$S_A = \{A_{1,:}, \cdots, A_{m,:}\} \quad \text{and} \quad S_B = \{B_{:,1}, \cdots, B_{:,p}\}.$$
In our case, the problem of finding an interesting code size vs. accuracy tradeoff reduces to finding partitions of the sets $S_A$ and $S_B$ into $k_A \leq m$ and $k_B \leq p$ subsets, respectively, such that both of the following conditions hold:
1. the code size bound $\sigma$ is satisfied, that is:
$$(4n - 1) \cdot k_A \cdot k_B < \sigma,$$
2. and the error bound $\varepsilon$ is guaranteed, that is:
$$\varepsilon_{\text{error}} < \varepsilon,$$
where $\varepsilon_{\text{error}}$ is either the minimal, the maximal, or the average computation error depending on the certification level required by the user.
Remark that, given the partitions of $S_A$ and $S_B$, the first condition is easy to check. However in order to guarantee the error condition, we must compute the error bound using CGPE.
A benefit of formulating the refactoring strategy in terms of partitioning is the ability to give an upper bound on the number of possible dot-product mergings. Indeed, given a non-empty set $S$ of $k$ vectors, the number of different ways to partition $S$ into $k' \leq k$ non-empty subsets of vectors is given by the Stirling number of the second kind $\{k \atop k'\}$, defined as follows:
$$\left\{ {k \atop k'} \right\} = \frac{1}{k'!} \sum_{j=0}^{k'} (-1)^{k'-j} \binom{k'}{j} j^k.$$
However, $k'$ is a priori unknown and can be any value in $\{1, \cdots, k\}$. The total number of possible partitions of a set of $k$ vectors is therefore given by the following sum, commonly referred to as the Bell number:
$$B(k) = \sum_{k' = 1}^{k} \{k \atop k'\}.$$
Finally, in our case, the total number of partitionings is defined as follows:
$$\mathcal{P}(m,p) = B(m) \cdot B(p) - 2, \quad (3)$$
where $m \times p$ is the size of the resulting matrix. Notice that we exclude two partitions:
1. The partition of $S_A$ and $S_B$ into respectively $m$ and $p$ subsets which correspond to putting one and only one vector in each subset. This is the partitioning that leads to Algorithm 1.
2. The partition of $S_A$ and $S_B$ into one subset each. This partitioning leads to Algorithm 2.
Observe that even for small matrix sizes, the number $\mathcal{P}$ in Equation (3) is huge: $\mathcal{P}(5,5) = 2702$, $\mathcal{P}(10,10) \geq 2^{33}$, and $\mathcal{P}(64,64) \geq 2^{143}$. Therefore, aggressive heuristics are necessary to tackle this problem. In the following, we introduce a method based on finding closest pairs of vectors according to a certain metric, which makes it possible to find partitions that achieve the required tradeoff.
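The Bell-number counts can be reproduced with a short sketch (function names are ours):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bell(k):
    """Bell number B(k) via the recurrence B(k) = sum_j C(k-1, j) B(j)."""
    if k == 0:
        return 1
    return sum(comb(k - 1, j) * bell(j) for j in range(k))

def partitionings(m, p):
    """Number of candidate row/column partitionings, excluding the two
    trivial ones corresponding to Algorithm 1 and Algorithm 2."""
    return bell(m) * bell(p) - 2

print(partitionings(5, 5))                  # 2702
print(partitionings(10, 10).bit_length())   # 34, i.e. P(10,10) >= 2^33
```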
4.3 Dynamic Closest Pair Algorithm
Merging two fixed-point interval vectors $\mathcal{U}$ and $\mathcal{V}$ component-wise yields a vector whose ranges are larger than those of $\mathcal{U}$ and $\mathcal{V}$. This eventually leads to a degradation of the accuracy if the resulting vector is used to generate some DPCodes. In the extreme, this is illustrated by Algorithm 2 in Section 3.1. Therefore the underlying idea of our approach is that of putting together, in the same subset, row or column vectors that are close according to a given distance or criterion. Hence we ensure a reduction in code size while maintaining tight fixed-point formats, and thus guaranteeing a tight error bound.
---
1See http://oeis.org/A008277.
2See http://oeis.org/A000110.
Many metrics can be used to compute the distance between two vectors. Below, we cite two mathematically rigorous distances: the Hausdorff and the fixed-point distances. However, as our method does not use the mathematical properties of distances, any criterion that may discriminate between pairs of vectors may be used. For instance, although not a distance, the width criterion introduced below was used in our experiments.
**Hausdorff distance.** A fixed-point interval variable corresponds to a rational discrete interval. It follows that the Hausdorff distance (Moore et al., 2009), widely used as a metric in interval arithmetic, can be applied to fixed-point interval variables. Given two fixed-point intervals $I_1$ and $I_2$, this distance $d_H(I_1, I_2)$ is defined as follows:
\[
d_H : \text{Fix} \times \text{Fix} \to \mathbb{R}^+
\]
\[
d_H(I_1, I_2) = \max \{ |\underline{I}_1 - \underline{I}_2|, |\overline{I}_1 - \overline{I}_2| \},
\]
where $\underline{I}_1$ and $\overline{I}_1$ stand for the lower and upper bound of the interval $I_1$, respectively. Roughly, this distance computes the maximum increase suffered by $I_1$ and $I_2$ when computing their union $I_1 \cup I_2$, as illustrated on Figure 3(a).
**Fixed-point distance.** Contrary to the Hausdorff distance, which reasons on the intervals defined by the fixed-point variables, the fixed-point distance uses only their fixed-point formats. As such, it is slightly faster to compute. Given two fixed-point intervals $I_1$ and $I_2$, this distance $d_F(I_1, I_2)$ is defined as follows:
\[
d_F : \text{Fix} \times \text{Fix} \to \mathbb{N}
\]
\[
d_F(I_1, I_2) = |\text{IntegerPart}(I_1) - \text{IntegerPart}(I_2)|.
\]
Analogously to the Hausdorff distance, this distance computes the increase in the integer part suffered by $I_1$ and $I_2$ when computing their union $I_1 \cup I_2$.
**Width criterion.** Given two fixed-point intervals $I_1$ and $I_2$, our third metric computes the width of the interval resulting from the union $I_1 \cup I_2$, as illustrated on Figure 3(b). Formally, it is defined as follows:
\[
d_W : \text{Fix} \times \text{Fix} \to \mathbb{R}^+
\]
\[
d_W(I_1, I_2) = \overline{I_1 \cup I_2} - \underline{I_1 \cup I_2}.
\]
Notice that although the metrics are introduced as functions of two fixed-point intervals, we generalized them to fixed-point interval vectors by considering either the component-wise max or average value.
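For illustration, the three criteria can be sketched on intervals represented as `(lo, hi)` pairs; the `integer_part_bits` helper is our simplified stand-in for the integer part of a fixed-point format, not the paper's exact definition:

```python
def d_hausdorff(i1, i2):
    # max increase of the bounds when taking the union of the two intervals
    return max(abs(i1[0] - i2[0]), abs(i1[1] - i2[1]))

def integer_part_bits(i):
    # simplified stand-in for IntegerPart: bits needed for the integer part
    m = max(abs(i[0]), abs(i[1]))
    return max(1, int(m).bit_length())

def d_fixed_point(i1, i2):
    # difference between the integer-part sizes of the two formats
    return abs(integer_part_bits(i1) - integer_part_bits(i2))

def d_width(i1, i2):
    # width of the union hull of the two intervals
    return max(i1[1], i2[1]) - min(i1[0], i2[0])

def vec_metric(u, v, d):
    # generalization to interval vectors: component-wise max (or average)
    return max(d(a, b) for a, b in zip(u, v))

print(d_hausdorff((0, 2), (1, 5)), d_width((0, 2), (1, 5)))  # 3 5
```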
Given one of the above metrics and a set $S$ of vectors, we are able to implement the `findClosestPair` routine that returns the closest pair of vectors in $S$. There are several ways to implement such a routine. A naive $O(n^2)$ approach would compare all the possible pairs of vectors. But, depending on the distance used, optimized implementations may rely on the well-established fast closest-pair-of-points algorithms (Shamos and Hoey, 1975), (Cormen et al., 2009, §33). Nevertheless, our contribution lies mainly in the design of Algorithm 3, which is based on a dynamic search for a code that satisfies an accuracy bound $C_1$ and a code size bound $C_2$.
Here, we assume that Algorithm 1 satisfies the accuracy bound $C_1$, otherwise, no smaller code satisfying $C_1$ could be found. Therefore, Algorithm 3
---
**Algorithm 3 Dynamic Closest Pair algorithm.**
**Input:**
Two matrices $A \in \text{Fix}^{m \times n}$ and $B \in \text{Fix}^{n \times p}$
An accuracy bound $C_1$ (e.g., average error bound $< \varepsilon$)
A code size bound $C_2$
**Output:**
Code to compute $A \cdot B$ s.t. $C_1$ and $C_2$ are satisfied, or no code otherwise
**Algorithm:**
1: $S_A \leftarrow \{A_1, \ldots, A_m\}$
2: $S_B \leftarrow \{B_1, \ldots, B_p\}$
3: while $C_1$ is satisfied do
4: $(u_1, v_1), d_A \leftarrow \text{findClosestPair}(S_A, d)$
5: $(u_2, v_2), d_B \leftarrow \text{findClosestPair}(S_B, d)$
6: if $d_A \leq d_B$ then
7: remove $u_1, v_1$ from $S_A$
8: insert $u_1 \cup v_1$ into $S_A$
9: else
10: remove $u_2, v_2$ from $S_B$
11: insert $u_2 \cup v_2$ into $S_B$
12: end if
13: for $(A_i, B_j) \in S_A \times S_B$ do
14: $\text{DPSynthesis}(A_i, B_j)$
15: end for
16: end while
17: /* Revert the last merging step. */
18: /* Check the bound $C_2$. */
starts with two sets of $m$ and $p$ vectors respectively, corresponding to the rows of $A$ and the columns of $B$. As long as the bound $C_1$ is satisfied, each step of the while loop merges together the closest pair of rows or columns, and thus decrements the total number of vectors by 1. At the end of Algorithm 3, if the size of the generated code satisfies the code size bound $C_2$, a tradeoff solution has been found. Otherwise, Algorithm 3 has failed to find a code that satisfies both bounds $C_1$ and $C_2$. This algorithm was implemented in the FPLA tool,\textsuperscript{6} which relies on CGPE; work is in progress to enhance it with more linear algebra routines. Section 5 studies the efficiency of this algorithm on a variety of fixed-point benchmarks.
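The merging loop of Algorithm 3 can be sketched as follows; the naive $O(n^2)$ closest-pair search, the component-wise max Hausdorff metric, and the `error_ok` predicate standing in for the accuracy bound $C_1$ are our simplifications (the real tool synthesizes DPCodes and evaluates the error at every step):

```python
import itertools

def merge(u, v):
    # component-wise union of two interval vectors
    return [(min(a, c), max(b, d)) for (a, b), (c, d) in zip(u, v)]

def vec_dist(u, v):
    # component-wise max of the Hausdorff distance (one possible metric)
    return max(max(abs(a - c), abs(b - d)) for (a, b), (c, d) in zip(u, v))

def find_closest_pair(vectors):
    # naive O(n^2) search for the closest pair of vectors
    if len(vectors) < 2:
        return (0, 0), float("inf")
    return min((((i, j), vec_dist(vectors[i], vectors[j]))
                for i, j in itertools.combinations(range(len(vectors)), 2)),
               key=lambda t: t[1])

def dynamic_closest_pair(rows, cols, error_ok):
    # Greedy loop in the spirit of Algorithm 3: while the accuracy predicate
    # holds, merge the closest pair among the rows of A or the columns of B.
    S_A, S_B = list(rows), list(cols)
    while len(S_A) + len(S_B) > 2:
        (i1, j1), d_A = find_closest_pair(S_A)
        (i2, j2), d_B = find_closest_pair(S_B)
        if d_A <= d_B:
            cand_A = [v for k, v in enumerate(S_A) if k not in (i1, j1)]
            cand_A.append(merge(S_A[i1], S_A[j1]))
            cand_B = S_B
        else:
            cand_B = [v for k, v in enumerate(S_B) if k not in (i2, j2)]
            cand_B.append(merge(S_B[i2], S_B[j2]))
            cand_A = S_A
        if not error_ok(cand_A, cand_B):
            break  # revert the last merging step: keep the previous sets
        S_A, S_B = cand_A, cand_B
    return S_A, S_B
```

With `error_ok` always true, the loop collapses both sets to a single merged vector each, which corresponds to the compact Algorithm 2.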
## 5 NUMERICAL EXPERIMENTS
In this section, we illustrate the efficiency of our heuristics, the behaviour of Algorithm 3, and the impact of the distance and the matrix size through a set of numerical results.
### 5.1 Experimental Environment
Experiments have been carried out on an Intel Xeon Quad Core 3.1 GHz running the GNU/Linux environment. In our experiments, we used 3 structured benchmarks and 1 unstructured benchmark. For the structured benchmarks, the distributions of large coefficients throughout the matrices follow different patterns. This is achieved through weight matrices, as shown in Table 2, where $W_{ij}$ corresponds to the element of row $i$ and column $j$ of the considered weight matrix.
Notice that the dynamic range, defined as $\max(W_{i,j})/\min(W_{i,j})$, is the same for all benchmarks, and is equal to $2^{n/2}$. The reason we did not directly use these matrices in our experiments is that the first 3 patterns correspond to structured matrices in the usual sense, and better algorithms to multiply structured matrices exist (Mouilleron, 2011). To obtain random matrices where the large coefficients are still distributed according to the pattern described by the weight matrices, we computed the Hadamard product of the Table 2 matrices with normally distributed matrices generated using Matlab's \texttt{randn} function. Finally, notice that the matrices obtained this way have floating-point coefficients. In order to get fixed-point matrices, we first converted them to interval matrices by considering the radius-1 intervals centered at each coefficient. Next, the floating-point intervals were converted into fixed-point variables by considering the smallest fixed-point format that holds all the interval's values.
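The generation of the random benchmark can be sketched with the Python standard library, `random.gauss` standing in for Matlab's `randn` (function names are ours):

```python
import random

def random_benchmark(n, seed=42):
    # Random pattern: weight 2**rand(0, n/2 - 1) per entry,
    # Hadamard-multiplied by a normally distributed matrix
    rng = random.Random(seed)
    return [[(2 ** rng.randint(0, n // 2 - 1)) * rng.gauss(0.0, 1.0)
             for _ in range(n)] for _ in range(n)]

def to_intervals(matrix, radius=1.0):
    # radius-1 intervals centered at each coefficient, as described above
    return [[(x - radius, x + radius) for x in row] for row in matrix]

A = to_intervals(random_benchmark(6))
```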
### 5.2 Efficiency of the Distance-Based Heuristic
As a first experiment, let us consider 2 of the previous benchmarks: centered and random square matrices of size 6. For each, we build two matrices $A$ and $B$ and observe the efficiency of our closest-pair-based heuristic by comparing the result of Algorithm 3 to all the possible codes. To do so, we compute all the possible row or column mergings: from Equation (3) and including the two degenerate cases, for size-6 matrices, there are 41209 such mergings. For each of these, we synthesize the codes for computing $A \cdot B$, and determine the average error. This exhaustive experiment took approximately 2h15min per benchmark. Figure 4 shows the average error of the produced codes according to the number of DPCodes involved. Next we ran our tool using the Hausdorff distance to observe the behavior of Algorithm 3 and recorded all the intermediate steps. This took less than 10s for each benchmark and corresponds to the dark blue dots in Figure 4. Notice at both extremes the accurate algorithm, which produces 36 dot-products, and the compact algorithm, which produces only 1 dot-product. Notice also that Algorithm 3 is an iterative and deterministic algorithm. Once it goes into a wrong branch of the result space, this may lead to a code having an average error slightly larger than the best case. This can be observed on Figure 4(b): the first 6 steps produce code with very tight average error, but step 7 results in a code with an average error of $\approx 10^{-3}$ while the best code has an error of $\approx 5 \cdot 10^{-4}$. As a consequence, the remainder of the algorithm produces a code with an error of $\approx 3 \cdot 10^{-3}$ instead of $\approx 10^{-3}$ for the best case.
Despite this, these experiments show the interest of our heuristic approach. Indeed we may observe that, at each step, the heuristic merges together 2 rows of $A$ or 2 columns of $B$ to produce a code having in most cases an average error close to the best case.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Name} & \textbf{$W_{ij}$} & \textbf{Heat map} \\
\hline\hline
Center & $2^{\max(i,n-1-(n-1-j)-\lfloor n/2\rfloor)}$ & \includegraphics[width=1cm]{center_heatmap.png} \\
Edges & $2^{\min(i,n-1-(n-1-j))}$ & \includegraphics[width=1cm]{edges_heatmap.png} \\
Rows / Columns & $2^{\lfloor i/2 \rfloor}$ & \includegraphics[width=1cm]{rows_columns_heatmap.png} \\
Random & $2^{\text{rand}(0,n/2-1)}$ & \includegraphics[width=1cm]{random_heatmap.png} \\
\hline
\end{tabular}
\caption{Weight matrices considered for the benchmarks.}
\end{table}
\textsuperscript{6}FPLA: Fixed-Point Linear Algebra.
This is particularly the case on Figure 4(a) for random benchmarks. Moreover, Algorithm 3 converges toward code having good numerical quality much faster than the exhaustive approach.
### 5.3 Impact of the Metric on the Tradeoff Strategy
In this second experiment, we consider 25 × 25 matrices. For each benchmark introduced above, 50 different matrix products are generated, and the results exhibited are computed as the average over these 50 products. To compare the different distances, we consider the average accuracy bound: for each metric, we varied this bound and used Algorithm 3 to obtain the most compact codes that satisfy it. Here we ignored the code size bound $C_2$ by setting it to a large enough value. Also, in order to show the efficiency of the closest pair strategy, we compare the codes generated using Algorithm 3 with those of an algorithm where the merging of rows and columns is carried out randomly. Figure 5 shows the results of running FPLA.
First notice that, as expected, large accuracy bounds yield the most compact codes. For instance, for all the benchmarks, no matter the distance used, if the target average accuracy is $> 2^{-9.5}$, one DPCode suffices to evaluate the matrix multiplication; this indeed amounts to using Algorithm 2. Also as expected, and except for a few values, when used with one of the distances above, our algorithm produces fewer DPCodes than with the random function as a distance. Using the average width criterion, our algorithm satisfies the bound with only 58 DPCodes, while the random algorithm needs 234 DPCodes; this yields a code size reduction of up to 75%. Notice also that, globally, the center benchmark is the most accurate. This is due to the fact that few of its rows/columns have a high dynamic range. On Figures 5(b) and 5(d), in the edges as well as the random benchmarks, all of the rows and columns have a high dynamic range, which explains in part why these benchmarks are less accurate than the center benchmark. These experiments also suggest that average-based distances yield tighter code than max-based ones.
### 5.4 Impact of the Matrix Size
In this third experiment, we study the influence of the matrix size on the methodology presented above. To do so, we consider square matrices of the center benchmark with sizes 8, 16, 32, and 64, where each element has been scaled so that these matrices have the same dynamic range. We run Algorithm 3 using the average width criterion with different average error bounds from $2^{-21}$ to $2^{-14}$. Here the bound $C_2$ has also been ignored. For each of these benchmarks, we determine the number of DPCodes used for each average error bound, as shown in Table 3 (where "−" means that no result has been found).
This clearly shows that our method scales to large matrices, since it allows one to reduce the size of the
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Matrix size} & $2^{-21}$ & $2^{-20}$ & $2^{-19}$ & $2^{-18}$ & $2^{-17}$ & $2^{-16}$ & $2^{-15}$ & $2^{-14}$ \\
\hline\hline
8 & 24 & 6 & 1 & 1 & 1 & 1 & 1 & \\
16 & $-$ & 117 & 40 & 16 & 3 & 1 & 1 & \\
32 & $-$ & $-$ & 552 & 147 & 14 & 2 & 1 & 1 \\
64 & $-$ & $-$ & $-$ & 2303 & 931 & 225 & 48 & 1 \\
\hline
\end{tabular}
\caption{Number of DPCodes according to the matrix size and the average error bound.}
\end{table}
problem to be implemented, while maintaining a good numerical quality. For example, the accurate 64 × 64 matrix multiplication would use 4096 DPCodes. Using our heuristic, we produce a code with 2303 DPCodes having an average error bounded by $2^{-18}$, that is, a reduction of about 45%. Remark that no code with an average error bound of $2^{-19}$ is found, which means that even the accurate algorithm (Algorithm 1) has an error no tighter than $2^{-19}$: we can conclude that our heuristic converges towards code having an error close to the best case, but with about half as many DPCodes. Remark finally that the code size falls to 1 DPCode if we target an error bound of $2^{-14}$.
## 6 CONCLUSION
In this article, we discussed the automated synthesis of small and accurate programs for matrix multiplication in fixed-point arithmetic. More particularly, we presented a new strategy based on the merging of row or column vectors of the matrices, so as to reduce the size of the generated code while guaranteeing a certain error bound is satisfied. We also suggested criteria to decide which vectors are to be merged. The efficiency of this approach has been illustrated on a set of benchmarks. It remains now to validate this approach and the synthesized codes on real digital signal processing applications (including State Space filter realizations or MP3 compression).
In addition, further research directions are threefold. As a first direction, it would be interesting to study the possibility of extending this approach to tri-matrix multiplication, widely used in DSP applications (ALshebeili, 2001), (Qasim et al., 2008), and even to matrix chain multiplications or matrix powers. In this work, we only deal with the basic matrix multiplication algorithm; as a second direction, we could investigate the interest of using other families of matrix multiplication algorithms, such as those based on blocking, and their impact on a fixed-point implementation. Finally, fixed-point arithmetic is widely used for implementing linear algebra algorithms on hardware providing no floating-point unit. In this sense,
another research direction would be the automated synthesis of such algorithms in fixed-point arithmetic, like matrix inversion, for such hardware, and the adaptation of the techniques presented here to these particular problems.
## ACKNOWLEDGEMENTS
The work was supported by the ANR project DEFIS (Ingénierie Numérique et Sécurité 2011, ANR-11-INSE-0008).
## REFERENCES
---
State Space Analysis using Symmetries on Decision Diagrams
Maximilien Colange, Fabrice Kordon, Yann Thierry-Mieg
LIP6, CNRS UMR 7606, Université P. & M. Curie
4, place Jussieu, 75005 Paris, France
Maximilien.Colange@lip6.fr, Fabrice.Kordon@lip6.fr,
Yann.Thierry-Mieg@lip6.fr
Souheib Baarir
LIP6, CNRS UMR 7606 and
Université Paris Ouest Nanterre La Défense
200, avenue de la République, Nanterre, France
Souheib.Baarir@lip6.fr
Abstract—Two well-accepted techniques to tackle combinatorial explosion in model-checking are the exploitation of symmetries and the use of reduced decision diagrams. Previous work showed that these two techniques can be stacked in specific cases. This paper presents a novel and more general approach to combining these two techniques. The expected benefits of this combination are:
- in symmetry-based reduction, the main source of complexity resides in the canonization computation that must be performed for each newly encountered state; the use of shared decision diagrams allows one to canonize sets of states at once.
- in decision-diagram-based techniques, dependencies between variables induce an explosion in representation size; the manipulation of canonical states allows us to partly overcome this limitation.
We show that this combination is experimentally effective in many typical cases.
Keywords—Symmetries, Decision Diagrams, State Space Analysis
I. INTRODUCTION
Formal verification of concurrent systems, while promising push-the-button technology to check the correctness of systems, rapidly encounters the state space explosion problem.
Among many techniques proposed to fight this problem, symbolic approaches based on symmetries [1], [2] and BDD [3] have proven successful in practice.
Symmetries. If we are given a symmetry group $G$ over states and the transition relation, we can build a quotient graph of equivalence classes (also called orbits) of states, that may be exponentially smaller than the full state graph [4]. This quotient graph preserves many properties of interest such as reachability and linear temporal logic provided the property is itself symmetric with respect to $G$.
To build such a graph, the approach most commonly used [1], [5] consists in using a canonical representative of each orbit. However, an orbit may be of exponential size with respect to the number of elements in the state vector. Thus, the computation of a canonical representative of an orbit has exponential worst case complexity in time and/or memory (if the orbit is actually built).
This work was supported by the Délégation Générale pour l’Armement.
Junttila [5] proposes a general definition of this approach for systems whose states are integer vectors and symmetry groups are arbitrary permutation groups. Using the Schreier-Sims representation [6] of permutation groups, he proposes an algorithm effective in practice to compute a representative of an equivalence class.
However, the proposed algorithm only deals with explicit encoding of the state space. Thus, the problem remains hard since the algorithm must be applied on each individual state. This prevents a direct implementation on top of symbolic data structures such as decision diagrams (DD).
Decision Diagrams. Reduced Ordered BDD (ROBDD) were introduced by [7] to compactly represent boolean functions over boolean domains, such as large circuits. Since their first use for model-checking [3], many variants of decision diagrams have been proposed. They all allow one to manipulate large sets of states symbolically. The DD size can be exponentially smaller than the size of the represented set. Thanks to dynamic programming, algorithms manipulating DD are usually polynomial in the representation size.
Unfortunately, algorithms that manipulate classical explicit data structures must be redesigned to take advantage of DD. This is not always possible, particularly if the algorithm involves separate treatments for every state.
Combining Symmetries and Decision Diagrams. Indeed, initial attempts to combine a symbolic representation of sets of states with the computation of a quotient graph met with mixed success. The problem, identified in [8], is that the orbit relation, which maps states to their representatives, has exponential size when represented as a BDD, whatever variable order is chosen.
Variations, such as using several representatives of an orbit, can be more effective but do not fully exploit the symmetry group.
A slightly different approach to building a quotient graph [2] is to use an abstract representation of orbits. This also allows one to exploit symmetries of the transition relation; however, this approach can only deal with specific groups of symmetries, and cannot easily be generalized to arbitrary permutation groups.
In practice, this approach can often be successfully combined with a symbolic representation of sets of states, as shown in [9] for symmetries limited to the full permutation group, or for the specific framework of Symmetric Nets (a.k.a. Well-Formed Petri nets) in [10], [11]. However, this approach is limited to specific symmetry groups and lacks generality.
Contribution. We propose an algorithm allowing one to work with arbitrary symmetry groups, that can be effectively implemented on top of symbolic data structures. Given a total ordering on states, the smallest state in an orbit is considered as its canonical representative.
Instead of directly representing the orbit relation, we introduce a “monotonic” function that, given a state $s$, returns a state $s'$ in the same orbit such that $s' < s$, if such an element $s'$ exists. By repeatedly applying such a monotonic function in a fixpoint, we achieve the same effect as if we were using the orbit relation, without ever having to explicitly compute and represent it.
Because this function operates over sets of states, it avoids individual representative computations for each state, thus leading to a general and efficient algorithm to combine the use of symmetries with symbolic data structures.
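The fixpoint idea can be illustrated on explicit sets of states. In this sketch (names are ours) the group is generated by adjacent transpositions, for which greedily stepping to a smaller symmetric state provably reaches the smallest element of each orbit; for arbitrary generators, the paper's monotonic function is needed to guarantee this:

```python
def apply_perm(g, s):
    # g is a position permutation: (g.s)[i] = s[g[i]]
    return tuple(s[g[i]] for i in range(len(s)))

def sigma(states, generators):
    # monotonic step over a set: replace each state by the smallest state
    # among itself and its one-step images under the generators
    return {min([s] + [apply_perm(g, s) for g in generators]) for s in states}

def canonize(states, generators):
    # fixpoint sigma^*: the same effect as mapping states to representatives,
    # without ever building the orbit relation
    cur = set(states)
    while True:
        nxt = sigma(cur, generators)
        if nxt == cur:
            return cur
        cur = nxt

# adjacent transpositions on 3 positions generate the full symmetric group
gens = [(1, 0, 2), (0, 2, 1)]
print(canonize({(3, 1, 2), (2, 3, 1)}, gens))  # {(1, 2, 3)}
```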
Outline. Section II defines the required notions on symmetries and decision diagrams. Then, Section III details how symmetries can be represented on top of decision diagrams. An example and some benchmarks are also provided there, before a conclusion in Section IV.
II. Preliminaries
This section defines the notion of quotient graph and the type of decision diagrams we use in Section III.
A. Quotient graph, definitions
We recall here the theory of symmetry reduction for state space analysis. These definitions are adapted from [5].
Definition 1. Transition system
A transition system is a tuple $(\mathcal{S}, \Delta, S_0)$ such that:
- $\mathcal{S}$ is a finite set of states,
- $\Delta \subseteq \mathcal{S} \times \mathcal{S}$ is the transition relation,
- $S_0 \subseteq \mathcal{S}$ is the set of initial states.
Transitions $(s_1, s_2) \in \Delta$ are denoted $s_1 \rightarrow s_2$. Symmetries of transition systems are defined using a bisimilarity relation between states.
Definition 2. Symmetry
Let $\mathcal{K} = (\mathcal{S}, \Delta, S_0)$ be a transition system. A symmetry of $\mathcal{K}$ is a permutation $g$ over $\mathcal{S}$ such that:
- $g.S_0 = S_0$
- $g$ is congruent with respect to the transition relation: $\forall s_1, s_2 \in \mathcal{S}, s_1 \rightarrow s_2 \iff g.s_1 \rightarrow g.s_2$
$G$, the set of all symmetries of $\mathcal{K}$, is a group because:
- the composition is associative,
- the composition of two symmetries and the inverse of a symmetry are still symmetries.
Definition 3. Equivalence relation $\equiv_G$
Two states $s_1, s_2 \in \mathcal{S}$ are said to be symmetric, denoted $s_1 \equiv_G s_2$, if there is a $g \in G$ such that $g.s_1 = s_2$. $\equiv_G$ is an equivalence relation over $\mathcal{S}$. $[x]_G$ denotes the equivalence class (also called orbit) of $x$ under $\equiv_G$.
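For intuition, an orbit can be enumerated explicitly by closing a state under a set of generating permutations (a sketch with names of our choosing; permutations act on positions):

```python
def apply_perm(g, s):
    # (g.s)[i] = s[g[i]] for a position permutation g
    return tuple(s[g[i]] for i in range(len(s)))

def orbit(s, generators):
    # BFS closure of {s} under the generators: the equivalence class [s]_G
    seen, stack = {s}, [s]
    while stack:
        cur = stack.pop()
        for g in generators:
            img = apply_perm(g, cur)
            if img not in seen:
                seen.add(img)
                stack.append(img)
    return seen

# a transposition and a 3-cycle generate the full symmetric group on 3 slots
print(len(orbit((1, 2, 3), [(1, 0, 2), (1, 2, 0)])))  # 6
```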
We may now define the abstraction of a transition system using $\equiv_G$.
Definition 4. Reduced transition system
$\tilde{\mathcal{K}} = (\tilde{\mathcal{S}}, \tilde{\Delta}, \tilde{S}_0)$ is a reduction of $\mathcal{K}$ w.r.t. $G$ if and only if:
- $\tilde{\mathcal{S}} \subseteq \mathcal{S}$ and $\forall s \in \mathcal{S}, \exists \tilde{s} \in \tilde{\mathcal{S}}: s \equiv_G \tilde{s}$,
- $\tilde{S}_0 \subseteq \tilde{\mathcal{S}}$ and $\forall s_0 \in S_0, \exists \tilde{s}_0 \in \tilde{S}_0: s_0 \equiv_G \tilde{s}_0$,
- $\tilde{\Delta} \subseteq \tilde{\mathcal{S}} \times \tilde{\mathcal{S}}$,
- if $\tilde{s}_1 \in \tilde{\mathcal{S}}$ and $(\tilde{s}_1, s_2) \in \Delta$, then there exists $\tilde{s}_2 \in \tilde{\mathcal{S}}$ such that $\tilde{s}_2 \equiv_G s_2$ and $(\tilde{s}_1, \tilde{s}_2) \in \tilde{\Delta}$,
- conversely, if $(\tilde{s}_1, \tilde{s}_2) \in \tilde{\Delta}$, then there exists $s_2 \in \mathcal{S}$ such that $s_2 \equiv_G \tilde{s}_2$ and $(\tilde{s}_1, s_2) \in \Delta$.
A reduction $\tilde{\mathcal{K}}$ of $\mathcal{K}$ w.r.t. $G$ preserves the reachability property and, under appropriate conditions, linear temporal formulae [12], [4]. Hence, the verification can be done on $\tilde{\mathcal{K}}$. Note that this definition allows one to use several representatives per orbit, generalizing the notion of quotient graph. This approach using several representatives yields a larger reduced structure but may be faster to build [5].
An abstract algorithm to compute $\tilde{\mathcal{K}}$ is presented on figure 1. Let $\text{repr}$ be a function that maps an element $s \in \mathcal{S}$ onto its representative $\tilde{s} \in [s]_G$; we extend $\text{repr}$ to sets of states element-wise. Let $\text{succ}$ be the function that maps any state $s$ to its successors: $\text{succ}(s) = \{ s' \mid s \rightarrow s' \}$.
\[
\begin{align*}
&\tilde{\mathcal{S}} := \text{repr}(S_0) \\
&\tilde{\Delta} := \emptyset \\
&\text{repeat} \\
&\quad \text{for } s \in \tilde{\mathcal{S}} \text{ do} \\
&\quad\quad \tilde{S}' := \text{repr}(\text{succ}(s)) \\
&\quad\quad \tilde{\Delta} := \tilde{\Delta} \cup \{ (s, \tilde{s}) \mid \tilde{s} \in \tilde{S}' \} \\
&\quad\quad \tilde{\mathcal{S}} := \tilde{\mathcal{S}} \cup \tilde{S}' \\
&\quad \text{end for} \\
&\text{until a fixpoint is reached}
\end{align*}
\]
Figure 1. The algorithm to generate $\tilde{\mathcal{K}}$
The size of $\tilde{\mathcal{S}}$ depends on the function $\text{repr}$, as $\tilde{\mathcal{S}} = \text{repr}(\mathcal{S})$, with two extreme cases:
- if $\text{repr}$ is the identity, then $\tilde{\mathcal{S}} = \mathcal{S}$ and $\tilde{\mathcal{K}} = \mathcal{K}$.
- if $\text{repr}$ maps all elements of an orbit onto the same unique element, then $\tilde{\mathcal{S}}$ is in bijection with $\mathcal{S}/G$, and the size of $\tilde{\mathcal{S}}$ is minimal.
Computing such a unique representative is however exponential in time in the worst case: the canonization problem is equivalent to the graph isomorphism problem, which is not known to have a polynomial-time solution [8], [5].
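The construction of Figure 1 can be sketched explicitly, taking `repr` as the orbit minimum, i.e., exactly the exponential canonization discussed above; the toy system and all names are ours:

```python
def apply_perm(g, s):
    # (g.s)[i] = s[g[i]] for a position permutation g
    return tuple(s[g[i]] for i in range(len(s)))

def orbit(s, gens):
    # closure of {s} under the generators
    seen, stack = {s}, [s]
    while stack:
        cur = stack.pop()
        for g in gens:
            img = apply_perm(g, cur)
            if img not in seen:
                seen.add(img)
                stack.append(img)
    return seen

def repr_(s, gens):
    # canonical representative: the smallest state of the orbit
    return min(orbit(s, gens))

def quotient(initial, succ, gens):
    # Figure-1-style construction of the reduced transition system
    S = {repr_(s, gens) for s in initial}
    Delta, todo = set(), list(S)
    while todo:
        s = todo.pop()
        for t in succ(s):
            r = repr_(t, gens)
            Delta.add((s, r))
            if r not in S:
                S.add(r)
                todo.append(r)
    return S, Delta

# toy system: n boolean slots, a transition sets one 0 slot to 1;
# the symmetry group is generated by adjacent transpositions of slots
n = 3
gens = [tuple(range(i)) + (i + 1, i) + tuple(range(i + 2, n)) for i in range(n - 1)]
succ = lambda s: {s[:i] + (1,) + s[i + 1:] for i in range(n) if s[i] == 0}
S, Delta = quotient({(0,) * n}, succ, gens)
print(len(S))  # 4 representatives, one per number of 1s
```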
B. Data Decision Diagrams

A DDD is a data structure for representing a set of sequences of assignments of the form $(x_1 := v_1; x_2 := v_2; \ldots; x_n := v_n)$, where the $x_i$ are variables and the $v_i$ are integer values. We assume no implicit variable ordering, and the same variable can occur several times in an assignment sequence (though with some constraints, see [14]). We define the terminal $\mathbf{1}$ to represent the empty assignment sequence, which terminates any valid sequence. The terminal $\mathbf{0}$ represents the empty set of assignment sequences.
**Definition 5 (DDD).** Let $\text{Var}$ be a set of variables, and for any $\omega$ in $\text{Var}$, let $\text{Dom}(\omega) \subseteq \mathbb{N}$ be the domain of $\omega$. The set $\mathcal{D}$ of DDD is defined inductively by: $\delta \in \mathcal{D}$ if either $\delta \in \{0, 1\}$ or $\delta = (\omega, \text{arc})$ with $\omega \in \text{Var}$, and $\text{arc} : \text{Dom}(\omega) \to \mathcal{D}$ is a mapping where only a finite subset of $\text{Dom}(\omega)$ maps to DDD other than $\mathbf{0}$.
By convention, edges that map to the DDD \( \mathbf{0} \) are not represented.
For instance, consider the DDD shown in figure 2. Each path in the DDD thus corresponds to a sequence of assignments. In this work, we use DDD to represent states in $\mathbb{N}^n$; thus each assignment sequence represents a system state.
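A toy encoding of DDD as shared nodes may help fix intuitions. This sketch (ours) assumes a common variable ordering and equal-length sequences, which the general definition does not require; absent arcs play the role of the terminal $\mathbf{0}$:

```python
ONE = ("1",)  # the 1-terminal, ending every valid assignment sequence

def singleton(seq):
    # build the DDD containing exactly one assignment sequence
    node = ONE
    for var, val in reversed(seq):
        node = (var, {val: node})
    return node

def union(d1, d2):
    # set union of two DDD; shared prefixes are merged arc by arc
    if d1 is ONE or d2 is ONE:
        assert d1 is d2  # equal-length sequences in this sketch
        return ONE
    (v1, a1), (v2, a2) = d1, d2
    assert v1 == v2  # common variable ordering assumed
    arcs = dict(a1)
    for val, child in a2.items():
        arcs[val] = union(arcs[val], child) if val in arcs else child
    return (v1, arcs)

def sequences(d, prefix=()):
    # enumerate the assignment sequences encoded by a DDD
    if d is ONE:
        yield prefix
        return
    var, arcs = d
    for val, child in arcs.items():
        yield from sequences(child, prefix + ((var, val),))
```

For example, the union of the sequences `x:=1; y:=2` and `x:=1; y:=3` shares the single `x` node and branches on `y`.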
**Operations and Homomorphisms.** DDD support the standard set operations $\cup$, $\cap$, $\setminus$. The semantics of these operations are based on the sets of assignment sequences that the DDD represent.
DDD also offer a concatenation $\delta_1 \cdot \delta_2$ which replaces the terminal $\mathbf{1}$ of $\delta_1$ by $\delta_2$; this corresponds to a cartesian product. Basic and inductive homomorphisms are also introduced to define application-specific operations. A more detailed description of DDD homomorphisms can be found in [14].
A basic homomorphism is a mapping \( \Phi : \mathcal{D} \to \mathcal{D} \)
satisfying \( \Phi(\mathbf{0}) = \mathbf{0} \) and \( \forall \delta, \delta' \in \mathcal{D}, \Phi(\delta \cup \delta') = \Phi(\delta) \cup \Phi(\delta') \).
Many basic homomorphisms are hard-coded. The sum \( + \) of two homomorphisms, defined by \( \forall \delta \in \mathcal{D}, (\Phi_1 + \Phi_2)(\delta) = \Phi_1(\delta) \cup \Phi_2(\delta) \), and their composition \( \circ \), defined by \( (\Phi_1 \circ \Phi_2)(\delta) = \Phi_1(\Phi_2(\delta)) \), are themselves homomorphisms.
A homomorphism \( c \) is a selector if \( \forall \delta \in \mathcal{D}, c(\delta) \subseteq \delta \).
This makes it possible to represent Boolean conditions, as \( c \) selects the states satisfying a given condition; the negation of \( c \) is thus \( \overline{c}(\delta) = \delta \setminus c(\delta) \). As a shorthand for "if-then-else", we use
IfThenElse\((c, h_1, h_2) = h_1 \circ c + h_2 \circ \overline{c}, \) where \( h_1 \) and \( h_2 \) are
homomorphisms.
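These combinators can be sketched over explicit state sets (a hypothetical model of the operators, not the libDDD API), with homomorphisms as plain functions on frozensets:

```python
# Homomorphism combinators modeled on explicit frozensets (illustrative only).
def hsum(h1, h2):            # (h1 + h2)(S) = h1(S) ∪ h2(S)
    return lambda S: h1(S) | h2(S)

def hcomp(h1, h2):           # (h1 ∘ h2)(S) = h1(h2(S))
    return lambda S: h1(h2(S))

def neg(c):                  # selector negation: c̄(S) = S \ c(S)
    return lambda S: S - c(S)

def if_then_else(c, h1, h2): # IfThenElse(c, h1, h2) = h1 ∘ c + h2 ∘ c̄
    return hsum(hcomp(h1, c), hcomp(h2, neg(c)))
```

For instance, with a selector `even` and a transformation `double`, `if_then_else(even, double, ident)` doubles the even states and leaves the odd ones untouched.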
The fixpoint \( h^* \) of a homomorphism, defined as \( h^*(\delta) =
h^k(\delta) \) where \( k \) is the smallest integer such that \( h^k(\delta) =
h^{k+1}(\delta) \), is also a homomorphism provided a finite \( k \) exists.
Besides providing a high level way of specifying a
system’s transition relation, homomorphisms can be used
to express many model checking algorithms directly. For
instance, given a DDD \( s_0 \) representing initial states and a
homomorphism \( \text{succ} \) representing the transition relation,
we can obtain the reachable states by the equation \( \text{Reach} = (\text{succ} + \text{Id})^*(s_0) \).
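The reachability equation can be prototyped the same way on explicit sets; `succ` and `s0` below are illustrative stand-ins, and the identity homomorphism plays the role of Id:

```python
# Reach = (succ + Id)*(s0): iterate until fixpoint (explicit-set sketch).
def fixpoint(h):
    def star(S):
        while True:
            S2 = h(S)
            if S2 == S:
                return S
            S = S2
    return star

def reach(succ, s0):
    # (succ + Id)(S) = succ(S) ∪ S
    return fixpoint(lambda S: succ(S) | S)(s0)
```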
Specifying model checking problems as homomorphisms
allows the software library to enable automatic rewrites
that yield much better performance, such as the saturation
algorithm [15].
**III. Symmetries and Symbolic Structures**
In this section we will develop our ideas about how to
combine Symmetries and Symbolic Structures in a general
framework.
**A. Assumptions**
**States** We make the assumption that the system’s states \( S \)
are vectors of integers, of fixed size \( n: S \subseteq \mathbb{N}^n \).
**Symmetries** We consider symmetries that permute
the indexes: \( \forall g \in G, \forall v = (v_1, v_2, \ldots, v_n) \in S, g.v = (v_{g(1)}, v_{g(2)}, \ldots, v_{g(n)}) \). The group of all permutations over a set of size \( n \) is denoted by \( S_n \).
We then manipulate symmetry groups as sets of permutations.
Conversely, given a set of permutations \( H \), let \( \langle H \rangle \)
denote the group generated by \( H \).
States are totally ordered. We use lexicographic ordering, noted \(<\). The canonical representative \(\hat{s}\) of an orbit \([s]_G\) is defined as its smallest element (with respect to \(<\)). Thus, \(\forall s \in S, \hat{s} = \text{min}[s]_G\).
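As a sketch of this definition, the canonical representative can be computed by brute force over a finite group (exponential, for illustration only); the action follows the convention \( (g.v)_i = v_{g(i)} \), with permutations given as 1-based tuples:

```python
# Brute-force canonical representative: min of the orbit under lex order.
from itertools import permutations

def act(g, v):
    """(g.v)_i = v_{g(i)}; g is a 1-based permutation tuple."""
    return tuple(v[i - 1] for i in g)

def canonical(v, G):
    """Smallest element of the orbit of v under the group G."""
    return min(act(g, v) for g in G)
```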
B. Symbolic Symmetry algorithm
Given these premises, we use the algorithm of figure 3 to canonize a set of states.
\[
\begin{array}{l}
\text{set\_canonize}(H \subseteq S_n, S \subseteq \mathbb{N}^n): \\
\quad \text{repeat} \\
\qquad \text{for } g \in H \text{ do} \\
\qquad\quad S' := \{ s \mid s \in S,\; g.s < s \} \\
\qquad\quad S := (S \setminus S') \cup g.S' \\
\qquad \text{end for} \\
\quad \text{until } S \text{ no longer evolves} \\
\quad \text{return } S
\end{array}
\]
Figure 3. Symbolic algorithm to canonize a set of states.
This algorithm iterates over the permutations of \(H\), applying each one only to the states that it reduces. If the permutations in \(H\) are permutations of the symmetry group \(G\) of the system, we are assured that at each step of the algorithm, each state is either left as is, or mapped to a strictly smaller state belonging to its orbit. Since each orbit has a minimum (its canonical representative) this algorithm is guaranteed to converge.
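The algorithm of figure 3 can be prototyped on explicit sets to observe its behavior; the real implementation works symbolically on DDDs, so `act` and the plain Python sets below are our own stand-ins. For each permutation, the states it reduces are replaced by their images, until stabilization:

```python
# Explicit-set sketch of set_canonize (Fig. 3); illustrative only.
def act(g, s):
    """(g.s)_i = s_{g(i)}; g is a 1-based permutation tuple."""
    return tuple(s[i - 1] for i in g)

def set_canonize(H, S):
    S = set(S)
    changed = True
    while changed:                       # "repeat ... until S no longer evolves"
        changed = False
        for g in H:
            reduced = {s for s in S if act(g, s) < s}
            if reduced:
                S = (S - reduced) | {act(g, s) for s in reduced}
                changed = True
    return S
```

With `H` the adjacent transpositions of \( S_3 \), every permutation of a state's values collapses onto the sorted tuple.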
Admittedly, the algorithm might visit each state of an orbit (in decreasing order, one by one), yielding worst case exponential complexity. Since the problem is equivalent to graph isomorphism, this is not surprising. In practice however, with an appropriate choice of a small set of permutations in \(H\), this algorithm can be quite effective.
Let us note that the order in which the permutations of \(H\) are considered in the "for" loop (or equivalently, in the composition \(\bigcirc_{g \in H}\)) does not impact correctness, but may impede performance.
In fact, the choice of \(H\) is critical to the overall performance of this algorithm. If \(H = G\), then this algorithm converges after a single iteration of the outer loop ("repeat"): for each state, \(H\) contains the permutation that maps it to its representative. However, this means that, in the worst case, the size of \(H\) is exponential in \(n\). This is congruent with the observations of [8], in which the orbit relation is shown to be exponential in representation size.
A contrario, when \(H\) is small, many iterations may be necessary for the algorithm to converge, but each element of \(H\) is likely to reduce larger subsets \(S'\). Since the complexity of applying a permutation to a set of states is related to the representation size (in DDD nodes) and not to the number of states in the set, manipulating larger sets lowers the overall complexity.
Monotonic\(<\) Property. To obtain minimality, we would like to choose \(H\) such that \(\text{set\_canonize}(H,S) = \{\text{min}[s]_G|s \in S\}\).
In essence this means we require that any state \(s\) that is not the minimum of its orbit \([s]_G\) can be reduced (according to \(<\)) by applying a permutation of \(H\).
Definition 6 (monotonic\(<\)). Let \(G\) be a subgroup of \(S_n\). \(H \subseteq G\) is monotonic\(<\) w.r.t. \(G\) if and only if:
\[
\forall s \in S, \; (\exists g \in G, \; g.s < s) \implies (\exists h \in H, \; h.s < s).
\]
In the algorithm of figure 3, when states can no longer be reduced by any permutation of \(H\), and \(H\) is monotonic\(<\) w.r.t. \(G\), then by definition of the monotonic\(<\) property the states in \(S\) are exactly the canonical representatives of the input states.
If \(H\) is not monotonic\(<\) w.r.t. \(G\), the algorithm behaves like the one of figure 1 when several representatives are used.
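Definition 6 can be checked by brute force over a finite state space; the sketch below is exponential in \( n \) and purely illustrative:

```python
# Brute-force check of the monotonic_< property (Definition 6).
from itertools import permutations, product

def act(g, s):
    """(g.s)_i = s_{g(i)}; g is a 1-based permutation tuple."""
    return tuple(s[i - 1] for i in g)

def is_monotonic(H, G, states):
    """Every state reducible by some g in G must be reducible by some h in H."""
    return all(
        any(act(h, s) < s for h in H)
        for s in states
        if any(act(g, s) < s for g in G)
    )
```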
C. Symbolic encoding
States, being elements of \(\mathbb{N}^n\), are naturally represented as a DDD of \(n\) variables. Note that, by assumption, the system size is fixed in number of variables. For systems that dynamically allocate variables, a bound on the pool size must therefore be known a priori. However, variables may range over integers with a priori unknown bounds. This feature of DDD is exploited here, but the algorithm would also work with Boolean variables and any type of decision diagram. Labels of states, if we consider a Kripke structure instead of a transition system, can be encoded as additional state variables.
To encode algorithm of figure 3 using homomorphisms, we define for any permutation \(g \in S_n\):
- \(\text{reduces}(g)\), a selector homomorphism to retain states that are reduced by \(g\), i.e. \(\text{reduces}(g)(S) = \{s|s \in S, g.s < s\}\).
- \(\text{apply}(g)\), a homomorphism to apply \(g\) to each state of a set, i.e. \(\text{apply}(g)(S) = \{g.s|s \in S\}\).
The full algorithm is then expressed by the equation:
\[
\text{set\_canonize}(H) = \Big( \bigcirc_{g \in H} \text{IfThenElse}(\text{reduces}(g), \text{apply}(g), \text{Id}) \Big)^*
\]
Since convergence is ensured by the fact each orbit has a minimum, the fixpoint \(^*\) is well-defined. The homomorphism \(\text{set\_canonize}(H)\) can be applied to any set of states, yielding their canonical representatives when \(H\) is monotonic\(<\).
Apply and Reduces. The homomorphism \(\text{reduces}\), given that we are using lexicographic order and that states are in \(\mathbb{N}^n\), is expressed as a composition of variable comparisons. For instance, consider the permutation \(g = (2,3,1,4)\) of \(S_4\). We have \(g^{-1} = (3,1,2,4)\). Hence \(g\) reduces \(s = (s_1, s_2, s_3, s_4)\) iff
\[
s_{g^{-1}(1)} < s_1 \lor \Big( s_{g^{-1}(1)} = s_1 \land \big( s_{g^{-1}(2)} < s_2 \lor ( s_{g^{-1}(2)} = s_2 \land (\ldots)) \big) \Big)
\]
This general formula is instantiated for this specific \( g\) in the following way:
\[
s_3 < s_1 \lor (s_3 = s_1 \land (s_1 < s_2 \lor (s_1 = s_2 \land s_2 < s_3)))
\]
Let us note that since position 4 is invariant by \( g\), there are only three nested variable comparisons. Subsequent conditions are trivially simplified away. This condition is expressed using a selector homomorphism allowing comparison (by \(<\) and \(=\)) of the value of two variables of a state. The full condition homomorphism is expressed using composition \( \circ \) for \( \land \) and the sum \( + \) for \( \lor \).
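The nested comparison can be generated mechanically from \( g^{-1} \); this Python sketch (our own, not the homomorphism encoding) builds the predicate directly and skips positions fixed by \( g \):

```python
# Build the predicate "g reduces s" following the lexicographic expansion.
def reduces(g):
    ginv = [0] * len(g)
    for i, gi in enumerate(g, start=1):
        ginv[gi - 1] = i                  # compute g^{-1} (1-based)
    # positions left invariant by g contribute trivially true equalities
    idx = [i for i in range(len(g)) if ginv[i] != i + 1]
    def pred(s):
        for i in idx:
            a, b = s[ginv[i] - 1], s[i]   # compare s_{g^{-1}(i)} with s_i
            if a != b:
                return a < b
        return False                      # g.s == s: not reduced
    return pred
```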
The homomorphism apply is built as a composition of transpositions of adjacent elements noted \( \tau_{i,i+1} \). The original DDD definition [14] includes a general homomorphism to swap arbitrary variables of a DDD. Transposition of adjacent variables is a particular case of this.
We compute a path with the minimal number of these transpositions necessary to achieve the desired effect, and compose them to build apply. For instance, for \( g = (2,3,1,4) \) of \( S_4 \),
\[
g = \tau_{2,3} \circ \tau_{1,2}
\]
Let us note that the DDD homomorphism framework allows to easily define these complex operations, hence the implementation using libDDD [16] is straightforward. As a beneficial side effect, since a given transposition \( \tau \) can occur in several permutations, various permutations may benefit from the cache for transpositions.
Our algorithm can be implemented using other decision diagrams libraries, although swap and comparison of variables may not be offered natively.
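A bubble-sort pass yields such a minimal decomposition (the number of swaps equals the number of inversions of \( g \)). This sketch is illustrative, with 1-based transpositions; the paper's example \( g = \tau_{2,3} \circ \tau_{1,2} \) comes out as the swap list `[(2, 3), (1, 2)]`, read as a composition applied right to left:

```python
# Decompose a permutation into adjacent transpositions via bubble sort.
def adjacent_decomposition(g):
    g = list(g)
    swaps = []
    n = len(g)
    for _ in range(n):
        done = True
        for i in range(n - 1):
            if g[i] > g[i + 1]:
                g[i], g[i + 1] = g[i + 1], g[i]
                swaps.append((i + 1, i + 2))   # tau_{i,i+1}, 1-based
                done = False
        if done:
            break
    return swaps

def apply_swaps(swaps, v):
    """Apply a list of adjacent transpositions to a vector, in order."""
    v = list(v)
    for (i, j) in swaps:
        v[i - 1], v[j - 1] = v[j - 1], v[i - 1]
    return tuple(v)
```

Applying the recorded swaps in reverse order to a vector reproduces the action of \( g \).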
Note that the same algorithmic bricks can be used to compute the orbit of states, using the equation:
\[
\text{orbit}(H) = \Big( \sum_{g \in H} \text{apply}(g) + \text{Id} \Big)^*
\]
If \( \langle H \rangle = G \), applying \( \text{orbit}(H) \) to a set of states \( S \) returns the set \( \bigcup_{s \in S} [s]_G \).
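The orbit equation admits the same explicit-set prototype (illustrative only; the symbolic version operates on DDDs):

```python
# Orbit closure: saturate a set of states under a set of permutations.
def act(g, s):
    """(g.s)_i = s_{g(i)}; g is a 1-based permutation tuple."""
    return tuple(s[i - 1] for i in g)

def orbit(H, S):
    S = set(S)
    while True:                           # fixpoint of (sum of apply(g)) + Id
        S2 = S | {act(g, s) for g in H for s in S}
        if S2 == S:
            return S
        S = S2
```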
**Illustrative example.** Let us detail the run of the algorithm on a small illustrative example. Figure 4 shows the intermediate DDD produced by the application of \( \text{set\_canonize}(H) \) to a system of three variables. With \( G = S_3 \) as symmetry group, we choose \( H = \{ \tau_{1,2}, \tau_{2,3} \} \), which is monotonic\(<\) w.r.t. \( G \), as will be proved in Section III-D. We focus on the inner loop of the algorithm in figure 3. Each step corresponds to the application of an element \( g \) of \( H \) to the states reduced by \( g \) in the current DD. At the end of the algorithm, another iteration is necessary to check for convergence.
As we can see through this toy example, each step of the algorithm simultaneously reduces several states. In a single step, each permutation reduces all the states it can, even if they belong to different orbits.
States that belong to the same orbit are progressively collapsed onto their representative. For instance, thanks to the sharing of sub-structures, states \( (2,1,3) \) and \( (2,3,1) \) in 4(a) are collapsed onto \( (2,1,3) \) in 4(b): \( (2,1,3) \) is not a canonical representative, but it is smaller than \( (2,3,1) \). At this step, the two states are merged, allowing them to share any subsequent canonization step. In general, each step (with complexity polynomial in the DD size) might merge exponentially many states. This contrasts with explicit approaches, which canonize all these states individually.
**D. Finding a monotonic\(<\)**
As previously explained, the algorithm of figure 3 remains correct for any choice of \( H \subseteq G \). On the other hand, the choice of \( H \) is critical to its efficiency. Ideally, \( H \) should be monotonic\(<\) w.r.t. \( G \) to obtain maximal reduction and, heuristically for decision diagram based implementations, \( H \) should be as small as possible.
In the general case, computing a set \( H \) monotonic\(<\) w.r.t. \( G \) of minimal size is in \( O(n^6) \) with a brute-force algorithm.
Efficient data structures to store groups of permutations such as the Schreier-Sims representation [6] could provide a candidate to define \( H\). However, the generating set they provide is not monotonic\(<\) in general. Even when it is, its size can be much larger than necessary. For instance, the Schreier-Sims representation of the full group of permutations \( S_n\) is quadratic in \( n\), whereas a monotonic\(<\) set of size \( n\) exists.
We provide in this section an appropriate set \( H\) for common symmetry groups.
**Proposition 1.** The set of adjacent transpositions is monotonic\(<\) for \( S_n\).
**Proof:** Let \( s = (s_1, \ldots, s_n) \in \mathbb{N}^n\) be a state, such that \( \exists g \in S_n, g \cdot s < s\). This means that \( s\) is not sorted, and therefore, there exists an index \( i\) such that \( s_i > s_{i+1}\). Thus \( s' = \tau_{i,i+1} \cdot s = (s_1, \ldots, s_{i+1}, s_i, \ldots, s_n) < s\).
**Proposition 2.** Let \( r\) be the rotation \( (2,3,\ldots,n,1)\), and \( G = \langle r \rangle = \{ \text{Id}, r, r^2, \ldots, r^{n-1} \}\). Then \( G\) is the only monotonic\(<\) set w.r.t. \( G\).
**Proof:** For \( 0 < i \leq n\), let \( s = (1,2,\ldots,i-1,0,i+1,\ldots,n)\). Then the only rotation in \( G\) that reduces \( s\) is \( r^i\).
These two groups are the most frequently encountered groups of symmetries in the literature, as they occur naturally in many symmetric systems. This gives us monotonic\(<\) sets of size \( n\) for these two groups. The two properties above are still true when considering groups that act on a subset of the system variables.
When the symmetries of the system arise from several symmetry groups (i.e. symmetries of subsystems), we choose to use the union of their respective monotonic\(<\) sets.
Let \( G = \langle E \cup F \rangle\), and \( H_E, H_F\) be monotonic\(<\) sets w.r.t. \( E\) and \( F\) respectively. \( H_G = H_E \cup H_F\) is monotonic\(<\) w.r.t. \( G\) if \( E\) and \( F\) act on disjoint sets of variables. Otherwise, we are not ensured that \( H_G\) is monotonic\(<\) w.r.t. \( G\), but it can still be used as a good candidate set for the algorithm.
Other types of symmetries on data values, such as \( v = (obj_1, obj_2, ref_1, ref_2) \) with \( g.v = (obj_{g(1)}, obj_{g(2)}, g(ref_1), g(ref_2)) \), can be integrated into our algorithm seamlessly. This symmetry is of interest as it corresponds to the case where \( obj_1 \) and \( obj_2 \) contain similar objects and \( ref_1, ref_2 \) are references to these objects, which need to be reindexed if we exchange the positions of the two objects. This case is encountered when canonizing the memory (the heap in particular) of a concurrent system.
### E. Assessment
In this section, we assess our algorithm on some examples. We compare our approach to an implementation of Junttila’s algorithm for symmetry reduction and to symbolic model checking without the use of symmetries.
The tool LoLa [17] uses a Schreier-Sims representation of the symmetry group and produces a reduced system with potentially several representatives per orbit. It works with explicit data structures, thus its memory consumption grows linearly with the number of representative states. LoLa is a well maintained and mature software.
On the other hand, we compare our algorithm to libBits [16], [18], a model checker implemented using DDD, but no symmetries. Up to the addition of symmetries, it uses the same encoding (states, transition relation) as our prototype DD-Sym.
Table I compares, for these three tools, the size of the produced state space and the time and memory consumption during its construction. Experiments were run on a 64-bit Xeon processor at 2.6 GHz, with a time limit of 1 hour and a memory limit of 5 GB. The following models (shown in the annex) were processed:
**Software Product Line** [19]. It is a model extracted and then adapted from a case study concerning a software configuration process. Features and configuration options are
fully symmetric domains that do not interact directly. Thus, the union of their respective $\text{monotonic}_{<}$ sets is $\text{monotonic}_{<}$ w.r.t. the symmetry group of the model. LoLa and DD-Sym both compute the quotient graph, with one representative per orbit. The symmetry group exhibited by this model is particularly simple, so the canonization procedure has a relatively low complexity. The classical DD implementation has the best performance on this example, and LoLa the worst. However, DD-Sym's memory consumption does not grow beyond 1.3 GB, the point at which the DD garbage collector activates. This means that DD-Sym, on this model, does not compute intermediate structures whose size exceeds 1.3 GB, and could actually run within a memory budget confined to 1.3 GB. This limit is paid in time, as garbage collection frees the DD caches. Indeed, DD-Sym fails earlier than the classical DD implementation, due to the time limit. On the other hand, the classical DD implementation, which uses the same garbage collection mechanism, has a much bigger memory peak.
**Clients servers** [10]. It models a simple remote procedure call protocol between $n$ clients and $n$ servers sharing a common communication channel. Clients (resp. servers) are considered indistinguishable up to their identity. Thus, we have a full symmetry group on clients and a full symmetry group on servers. We use the union of their respective $\text{monotonic}_{<}$ sets, which is not itself $\text{monotonic}_{<}$ w.r.t. the symmetries of the whole system. However, on this model, DD-Sym scales better than the other tools. Although the reduction factor is good, the multiple representatives approach in LoLa still retains too many states to cope with larger scale parameters. DDDs are able to handle up to 2 billion states, but fail earlier than our prototype.
**SaleStore** [11]. It models a shopping mall where clients can shop for gifts. Clients and gifts form two fully symmetric domains, that interact when a client buys some gifts. Similarly to the clients servers model, we use the union of
the monotonic$_{<}$ sets. Again, the number of states in LoLa’s representation grows very fast; it fails before the purely symbolic approach. DD-Sym allows the symbolic approach to scale up to much larger model parameters, as the number of representatives grows very slowly.
Discussion on the Results of Table I and figures 5 to 8. The three tools do not compute the same representation of the state space, hence they don’t always find the same number of states. The classical DD tool computes the full transition system without any symmetry reduction, while both Lola and DD-Sym compute a quotient structure and the number of states shown is actually the number of orbit representatives computed.
Both DD-Sym and Lola use an algorithm which might lead to several representatives of an orbit being represented. LoLa’s strategy to compute several representatives for each orbit reduces the cost of canonization, and is supposed to be a good trade-off between the time-consuming canonization and the size of the quotient graph. Our own algorithm may produce several representatives if the provided set $H$ is not monotonic$_{<}$ w.r.t. $G$.
In order to check when the tested tools achieve full reduction, we have processed small instances of the models with another tool that is guaranteed to perform full reduction. In practice, both LoLa and DD-Sym compute a single representative per orbit for the Software Product Line model, achieving full reduction. For the two other models, the sets $H$ we use to canonize are not monotonic$_{<}$. In spite of this, DD-Sym still computes a single representative per orbit for the SaleStore model. On the client-server model, neither LoLa nor DD-Sym achieves maximum reduction, but LoLa computes many more representatives than we do.
LoLa fails on the Software Product Line model due to the time limit, although it consumes 33% more memory than the classical DD implementation. As previously explained, DD-Sym's memory consumption reaches a maximum when the garbage collector activates. DD-Sym then fails on bigger instances due to the time limit, but would consume less memory than the classical DD implementation if the time limit were higher.
For the two other models, DD-Sym exhibits the best performance and LoLa the worst. DD-Sym handles the models for higher scaling parameters, and its time and memory consumption grow more slowly than those of the two other tools. LoLa handles fewer instances, and its time and memory consumption are higher than those of the pure-DD tool.
The great number of states handled by the DD tool with a reasonable amount of memory shows the strength of decision diagrams to compress large state sets. Moreover, this assessment fully validates our novel approach, as it favorably compares to both Junttila’s algorithm and the classical DD approach. We thus have designed a way to combine the two symbolic approaches so that their respective optimizations can stack.
These results, while preliminary, are encouraging. Our approach allows the DD checker to scale better on some symmetric models, and it compares favorably to explicit symmetry-based methods.
DD-Sym will be integrated into the ITS framework, and extended to use Hierarchical Set Decision Diagrams [18].
IV. CONCLUSION
We have presented a novel approach to combining symmetries with symbolic data structures. It relies on the choice of an appropriate subset of symmetries, which makes it possible to compute a reduced state space without representing the orbit relation. Our algorithm supports arbitrary symmetry groups. Even if a monotonic$_{<}$ set cannot be computed easily, we provide an approximation that works well in practice for commonly encountered symmetries. Correctness is ensured even if the provided set of symmetries does not satisfy the monotonic$_{<}$ property; this simply yields a larger state space.
Although our experiments are so far limited, we show that this approach can improve a method that only uses decision diagrams.
We are currently investigating the definition of monotonic$_{<}$ sets for other symmetries, such as those encountered when considering memory addresses and pointers.
Another perspective in the context of local symmetries involves adaptation of the set used for canonization during the state space construction.
REFERENCES
Capturing and Enhancing In Situ System Observability for Failure Detection
Peng Huang, Johns Hopkins University; Chuanxiong Guo, ByteDance Inc.; Jacob R. Lorch and Lidong Zhou, Microsoft Research; Yingnong Dang, Microsoft
https://www.usenix.org/conference/osdi18/presentation/huang
This paper is included in the Proceedings of the 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’18).
October 8–10, 2018 • Carlsbad, CA, USA
ISBN 978-1-939133-08-3
Open access to the Proceedings of the 13th USENIX Symposium on Operating Systems Design and Implementation is sponsored by USENIX.
Abstract
Real-world distributed systems suffer unavailability due to various types of failure. But, despite enormous effort, many failures, especially gray failures, still escape detection. In this paper, we argue that the missing piece in failure detection is detecting what the requesters of a failing component see. This insight leads us to the design and implementation of Panorama, a system designed to enhance system observability by taking advantage of the interactions between a system’s components. By providing a systematic channel and analysis tool, Panorama turns a component into a logical observer so that it not only handles errors, but also reports them. Furthermore, Panorama incorporates techniques for making such observations even when indirection exists between components. Panorama can easily integrate with popular distributed systems and detect all 15 real-world gray failures that we reproduced in less than 7 s, whereas existing approaches detect only one of them in under 300 s.
1 Introduction
Modern cloud systems frequently involve numerous components and massive complexity, so failures are common in production environments [17, 18, 22]. Detecting failures reliably and rapidly is thus critical to achieving high availability. While the problem of failure detection has been extensively studied [8, 13, 14, 20, 24, 29, 33, 34, 47], it remains challenging for practitioners. Indeed, system complexity often makes it hard to answer the core question of what constitutes a failure.
A simple answer, as used by most existing detection mechanisms, is to define failure as complete stoppage (crash failure). But, failures in production systems can be obscure and complex, in part because many simple failures can be eliminated through testing [49] or gradual roll-out. A component in production may experience gray failure [30], a failure whose manifestation is subtle and difficult to detect. For example, a critical thread of a process might get stuck while its other threads including a failure detector keep running. Or, a component might experience limplock [19], random packet loss [26], fail-slow hardware [11, 25], silent hanging, or state corruption. Such complex failures are the culprits of many real-world production service outages [1, 3, 4, 6, 10, 23, 30, 36, 38].
As an example, ZooKeeper [31] is a widely-used system that provides highly reliable distributed coordination. The system is designed to tolerate leader or follower crashes. Nevertheless, in one production deployment [39], an entire cluster went into a near-freeze status (i.e., clients were unable to write data) even though the leader was still actively exchanging heartbeat messages with its followers. That incident was triggered by a transient network issue in the leader and a software defect that performs blocking I/Os in a critical section.
Therefore, practitioners suggest that failure detection should evolve to monitor multi-dimensional signals of a system, aka vital signs [30, 37, 44]. But, defining signals that represent the health of a system can be tricky. They can be incomplete or too excessive to reason about. Setting accurate thresholds for these signals is also an art. They may be too low to prevent overreacting to benign faults, or too high to reliably detect failures. For example, an impactful service outage in AWS was due to a latent memory leak, which caused the system to get stuck when serving requests and eventually led to a cascading outage [10]. Interestingly, there was a monitor for system memory consumption, but it triggered no alarm because of “the difficulty in setting accurate alarms for a dynamic system” [10]. These monitoring challenges are further aggravated in a multi-tenant environment where both the system and workloads are constantly changing [44].
In this paper, we advocate detecting complex production failures by enhancing observability (a measure of how well components’ internal states can be inferred from their external interactions [32]). While defining the absolute health or failure of a system in isolation is tricky,
```java
void syncWithLeader(long newLeaderXid) {
    QuorumPacket qp = new QuorumPacket();
    readPacket(qp);
    try {
        if (qp.getType() == Leader.SNAP) {
            deserializeSnapshot(qp); // Check if the leader sent a snapshot
            String sig = leaderIs.readString("signature");
            if (!sig.equals("BenWasHere"))
                throw new IOException("Bad signature");
        } else {
            LOG.error("Unexpected leader packet.");
            System.exit(13);
        }
    } catch (IOException e) {
        LOG.warn("Exception sync with leader", e);
        sock.close();
    }
}
```
Listing 1: A follower requesting a snapshot from the leader tries to handle or log errors, but it does not report them.
Panorama provides unified abstractions and APIs to report observations, and a distributed service to selectively exchange observations. Also, importantly, Panorama keeps the burden on developers low by automatically inserting report-generation code based on offline static analysis. In this way, Panorama automatically converts every component into an observer of the components it interacts with. This construction of in-situ observers differentiates Panorama from traditional distributed crash failure detection services [34, 47], which only measure superficial failure indicators.
In applying Panorama to real-world system software, we find some common design patterns that, if not treated appropriately, can reduce observability and lead to misleading observations. For example, if a requester submits requests to a provider, but an indirection layer temporarily buffers the request, the request may appear successful even though the provider has failed. This can cause the requester to report positive evidence about the provider. We study such common design patterns and characterize their impact on system observability (§4). Based on this, we enhance Panorama to recognize these patterns and avoid their effects on observability.
For failure detection, Panorama includes a decision engine to reach a verdict on the status of each component based on reported observations. Because these reports come from errors and successes in the execution paths of requester components instead of artificial, non-service signals, our experience suggests that a simple decision algorithm suffices to reliably detect complex failures.
We have implemented the Panorama system in Go and the static analyzer on top of Soot [46] and AspectJ [2]. Our experiences show that Panorama is easy to integrate with popular distributed systems including ZooKeeper, Cassandra, HDFS, and HBase. Panorama significantly outperforms existing failure detectors in that: (1) it detects crash failures faster; (2) it detects 15 real-world gray failures in less than 7 s each, whereas other detectors only detect one in 86 s; (3) Panorama not only detects, but also locates failures. Our experiments also show that Panorama is resilient to transient failures and is stable in normal operations. Finally, Panorama introduces only minor overhead (less than 3%) to the systems we evaluate it on.
## 2 Problem Statement
We consider failure detection in the context of a large distributed system $S$ composed of several subsystems. Each subsystem has multiple components. In total, $S$ contains $n$ processes $P_1, P_2, \ldots, P_n$, each with one or more threads. The whole system lies within a single administrative domain, but the code for different system components may be developed by different teams. For example, a storage system may consist of a front-end tier, a distributed lock service, a caching middleware, a messaging service, and a persistence layer. The persistence layer in turn includes metadata servers, structured table servers, and extent data nodes. An extent data node may be multi-threaded, with threads such as a data receiver, a data block scanner, a block pool manager, and an IPC-socket watcher. We assume the components trust each other, collectively providing services to external untrusted applications.
The main goal of failure detection is to correctly report the status of each component; in this work the only components we consider are processes and threads. Traditional failure detectors focus on crash failure, i.e., using only statuses UP and DOWN. We aim to detect not only crash failure but also gray failure, in which components experience degraded modes “between” UP and DOWN. The quality of a failure detector is commonly characterized by two properties: completeness, which requires that if a component fails, a detector eventually suspects it; and accuracy, which requires that a component is not suspected by a detector before it fails. Quality is further characterized by timeliness, i.e., how fast true failures are detected. Failure detectors for production systems should also have good localization, i.e., ease of pinpointing each failure in a way that enables expedient corrective action.
## 3 Panorama System
### 3.1 Overview
At a high level, Panorama takes a collaborative approach: It gathers observations about each component from different sources in real time to detect complex production failures. Collaborative failure detection is not a new idea. Many existing crash-failure detectors such as membership services exchange detection results among multiple components using protocols like gossip [47]. But, the scope of where the detection is done is usually limited to component instances with similar functionality or roles in a particular layer. Panorama pushes the detection scope to an extreme by allowing any thread in any process to report evidence, regardless of its role, layer, or subsystem. The resulting diverse sources of evidence enhance the observability of complex failures.
More importantly, instead of writing separate monitoring code that measures superficial signals, Panorama’s philosophy is to leverage existing code that lies near the boundaries between different components. Examples of such code include when one thread calls another, and when one process makes an RPC call to another. This captures first-hand observations, especially runtime errors that are generated from the executions of these code regions in production. When Panorama reports a failure, there is concrete evidence and context to help localize where the failure happened.
Figure 1 shows an overview of Panorama. Panorama is a generic detection service that can be plugged into any component in a distributed system. It provides unified abstractions to represent observations about a component’s status, and a library for reporting and querying detection results. For scalability, we use a decentralized architecture: for each $P_i$ in a monitored system, a co-located Panorama instance (a separate process) maintains a Local Observation Store (LOS) that stores all the observations that are made either by or about $P_i$. A local decision engine in the instance analyzes the observations in that LOS and makes a judgment about the process’s status. A central verdict server allows easy querying of, and arbitration among, these decentralized LOSes.
The Panorama service depends on many logical observers within the running components in the monitored system. Unlike traditional failure detectors, these logical observers are not dedicated threads running detection checks. Rather, they are diverse hooks injected into the code. These hooks use a thin library to collect and submit observations to the LOS via local RPC calls. They are inserted offline by a tool that leverages static analysis (§5). To achieve timeliness, the observations are reported in real time as $P_i$ executes. Panorama observers collect evidence not only about the locally attached component, but, more importantly, about other components that the observer interacts with. However, if $P_i$ never interacts with $P_j$, $P_i$ will not put observations about $P_j$ into its LOS. Panorama runs a dissemination protocol to exchange observations among a clique of LOSes that share common interaction components.
### 3.2 Abstractions and APIs
To be usable by arbitrary distributed system components, Panorama must provide a unified way to encapsulate observations.
As discussed earlier, the only components we consider are processes and threads. A component is an observer if it makes observations and a subject if it is observed; a component may be both an observer and a subject. A status is a categorization of the health of a subject; it takes only one of a small pre-determined set of values, including HEALTHY, DEAD, and a few levels of UNHEALTHY. Another possible value is PENDING, the meaning and use of which we will discuss in §5.4.
When an observer sees evidence of a subject’s status, that constitutes an observation. An observation contains a timestamp of when the observation occurred, the identities of the observer and subject, and the inferred status of the subject. It also contains a context describing what the observer was doing when it made the observation, at a sufficient granularity to allow Panorama to achieve fine-grained localization of failures. For instance, the context may include the method the observer was running, or the method’s class; the API call the observer was making to the subject; and/or the type of operation, e.g., short-circuit read, snapshot, or row mutation. A verdict is a summary, based on a decision algorithm, of a set of observations of the same subject.
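The observation abstraction above maps naturally onto a small data class. The sketch below is our own illustration of the fields an observation carries; the names and types are assumptions, not Panorama’s actual schema.

```java
import java.time.Instant;

// Hypothetical sketch of Panorama's observation abstraction (§3.2).
// Field and enum names are our own illustration.
class Observation {
    enum Status { HEALTHY, PENDING, UNHEALTHY, DEAD }

    final Instant timestamp;  // when the observation occurred
    final String observer;    // identity of the observing component
    final String subject;     // identity of the observed component
    final Status status;      // inferred status of the subject
    final String context;     // e.g., method, API call, or operation type

    Observation(Instant ts, String observer, String subject,
                Status status, String context) {
        this.timestamp = ts;
        this.observer = observer;
        this.subject = subject;
        this.status = status;
        this.context = context;
    }

    public String toString() {
        return observer + " saw " + subject + " as " + status
                + " while " + context + " at " + timestamp;
    }
}
```

A verdict would then be a summary computed over a set of such observations sharing the same subject.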
Each Panorama instance provides an API based on the above abstractions. It can be invoked by a local component, by another Panorama instance, or by an administration tool. When a component decides to use Panorama, it registers with the local Panorama instance and receives a handle to use for reporting. It reports observations using a local RPC ReportObservation; when it is done reporting it unregisters. A Panorama instance can register multiple local observers. If a component does not intend to report observations but merely wants to query component statuses, it need not register.
Each Panorama instance maintains a watch list: the set of subjects for which it keeps track of observations. By default, Panorama automatically updates this list to include the components that registered observers interact with. But, each observer can explicitly select subjects for this list using StartObserving and StopObserving. If another observer in another Panorama instance makes an observation about a subject in the watch list, that observation will be propagated to this instance with a remote RPC LearnObservation. Panorama calls JudgeSubject each time it collects a new observation, either locally or via remote exchange.
### 3.3 Local Observation Store
Each Panorama instance maintains a Local Observation Store (LOS) that stores all observation reports made by co-located components. The subjects of these reports include both local and remote components.
The LOS consists of two main structures: the raw observation store and the verdict table. The LOS partitions the raw observation store by subject into multiple tables for efficient concurrent access. Each record in a subject’s table corresponds to a single observer; it stores a list of the $n$ most recent observations of that subject made by that observer. The LOS is kept in memory to enable efficient access; asynchronously, its content is persisted to a local database to preserve the full observation history, for facilitating troubleshooting later. The raw observation store is synchronized with that of other Panorama instances that share common subjects. Therefore, an LOS contains observations made both locally and remotely.
A local decision engine analyzes the raw observation store to reach a verdict for each subject. This decision result is stored in the verdict table, keyed by subject. The verdict table is not synchronized among Panorama instances because it does not have to be: the decision algorithm is deterministic. In other words, given synchronized raw observations, the verdict should be the same. To enable convenient queries over the distributed verdict tables to, e.g., arbitrate among inconsistent verdicts, Panorama uses a central verdict server. Note, though, that the central verdict server is not on any critical path.
Including old observations in decisions can cause misleading verdicts. So, each observation has a Time-to-Live parameter, and a background garbage collection (GC) task runs periodically to retire old observations. Whenever GC changes the observations of a subject, the decision engine re-computes the subject’s verdict.
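The TTL-based retirement can be sketched in a few lines. The representation here (observations reduced to epoch-millisecond timestamps) and the method names are assumptions for illustration only.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of TTL-based observation retirement (§3.3): observations older
// than the TTL are dropped; the caller re-runs the decision engine
// whenever anything was retired. Names are illustrative only.
class ObservationGC {
    // Keep only observations (represented here by their epoch-ms
    // timestamps) that are no older than the TTL relative to "now".
    static List<Long> retire(List<Long> timestamps, long nowMs, long ttlMs) {
        return timestamps.stream()
                .filter(ts -> nowMs - ts <= ttlMs)
                .collect(Collectors.toList());
    }
}
```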
### 3.4 Observers
Panorama does not employ dedicated failure detectors. Instead, it leverages code logic in existing distributed-system components to turn them into in-situ logical observers. Each logical observer’s main task is still to provide its original functionality. As it executes, if it encounters an error related to another component, in addition to handling the error it will also report it as an observation to Panorama. There are two approaches to turn a component into a Panorama observer. One is to insert Panorama API hooks into the component’s source code. Another is to integrate with the component’s logs by continuously parsing and monitoring log entries related to other components. The latter approach is transparent to components but captures less accurate information. We initially adopted the latter approach by adding plug-in support in Panorama to manage log-parsing scripts. But, as we applied Panorama to more systems, maintaining these scripts became painful because their logging practices differed significantly. Much information is also unavailable in logs [50]. Thus, even though we still support logging integration, we mainly use the instrumentation approach. To relieve developers of the burden of inserting Panorama hooks, Panorama provides an offline analysis tool that does the source-code instrumentation automatically. §4 describes this offline analysis.
### 3.5 Observation Exchange
Observations submitted to the LOS by a local observer only reflect a partial view of the subject. To reduce bias in observations, Panorama runs a dissemination protocol to propagate observations to, and learn observations from, other LOSes. Consequently, for each monitored subject, the LOS stores observations from multiple observers. The observation exchange in Panorama is only among cliques of LOSes that share a subject. To achieve selective exchange, each LOS keeps a watch list, which initially contains only the local observer. When a local observer reports an observation to the LOS, the LOS will add the observation’s subject to the watch list to indicate that it is now interested in others’ observations about this subject. Each LOS also keeps an ignore list for each subject, which lists LOSes to which it should not propagate new observations about that subject. When a local observation for a new subject appears for the first time, the LOS does a one-time broadcast. LOSes that are not interested in the observation (based on their own watch lists) will instruct the broadcasting LOS to include them in its ignore list. If an LOS later becomes interested in this subject, the protocol ensures that the clique members remove this LOS from their ignore lists.
### 3.6 Judging Failure from Observations
With numerous observations collected about a subject, Panorama uses a decision engine to reach a verdict and stores the result in the LOS’s verdict table. A simple decision policy is to use the latest observation as the verdict. But, this can be problematic since a subject experiencing intermittent errors may be treated as healthy. An alternative is to reach an unhealthy verdict if there is any recent negative observation. This could cause one biased observer, whose negative observation is due to its own issue, to mislead others.
We use a bounded-look-back majority algorithm, as follows. For a set of observations about a subject, we first group the observations by the unique observer, and analyze each group separately. The observations in a group are inspected from latest to earliest and aggregated based on their associated contexts. For an observation being inspected, if its status is different than the previously recorded status for that context, the look-back of observations for that context stops after a few steps to favor newer statuses. Afterwards, for each recorded context, if either the latest status is unhealthy or the healthy status does not have the strict majority, the verdict for that context is unhealthy with an aggregated severity level.
In this way, we obtain an analysis summary for each context in each group. To reach a final verdict for each context across all groups, the summaries from different observers are aggregated and decided based on a simple majority. Using group-based summaries allows incremental update of the verdict and avoids being biased by one observer or context in the aggregation. The decision engine could use more complex algorithms, but we find that our simple algorithm works well in practice. This is because most observations collected by Panorama constitute strong evidence rather than superficial signals.
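A simplified sketch of the bounded-look-back majority idea follows: each observer’s recent observations of a context are summarized, and the per-observer summaries are combined by simple majority. The real algorithm also aggregates severity levels and handles PENDING; the fixed look-back window and all names here are our own simplifications.

```java
import java.util.List;

// Simplified sketch of the bounded-look-back majority decision (§3.6).
// "H" = healthy, "U" = unhealthy; observations are ordered latest first.
// Real Panorama also aggregates severity and handles PENDING.
class DecisionEngine {
    static final int LOOK_BACK = 3; // inspect only the few most recent observations

    // One observer's summary for one context.
    static String summarize(List<String> latestFirst) {
        int n = Math.min(LOOK_BACK, latestFirst.size());
        if (n == 0) return "H";
        int healthy = 0;
        for (int i = 0; i < n; i++)
            if (latestFirst.get(i).equals("H")) healthy++;
        // Unhealthy if the latest status is unhealthy, or HEALTHY lacks
        // a strict majority of the inspected window.
        if (latestFirst.get(0).equals("U") || healthy * 2 <= n) return "U";
        return "H";
    }

    // Final verdict for a context: simple majority across observers, so
    // one biased observer cannot mislead the verdict on its own.
    static String verdict(List<List<String>> perObserver) {
        int unhealthy = 0;
        for (List<String> group : perObserver)
            if (summarize(group).equals("U")) unhealthy++;
        return unhealthy * 2 > perObserver.size() ? "U" : "H";
    }
}
```

Note how a subject with intermittent errors is judged unhealthy even if its latest observation happens to be positive, while a single negative observer is outvoted.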
The pending status (§5.4) needs additional handling: during the look-back for a context, if the current status is healthy and the older status is pending, that older pending status is skipped because it was only temporary; in other words, that partial observation is now complete. Afterwards, a pending status whose occurrences exceed a threshold is downgraded to unhealthy.
## 4 Design Patterns and Observability
The effectiveness of Panorama depends on the hooks in observers. We initially designed a straightforward method to insert these hooks. In testing it on real-world distributed systems, however, we found that component interactions in practice can be complex. Certain interactions, if not treated appropriately, will cause the extracted observations to be misleading. In this section, we first show a gray failure that our original method failed to detect, and then investigate the reason behind the challenge.
### 4.1 A Failed Case
In one incident of a production ZooKeeper service, applications were experiencing many lock timeouts [23]. An engineer investigated the issue by checking metrics in the monitoring system and found that the number of connections per client had significantly increased. It initially looked like a resource leak in the client library, but the root cause turned out to be complicated.
The production environment used IPSec to secure inter-host traffic, and a Linux kernel module used Intel AES instructions to provide AES encryption for IPSec. But this kernel module could occasionally introduce data corruption with Xen paravirtualization, for reasons still not known today. Typically the kernel validated packet checksums and dropped corrupt packets. But, in IPSec, two checksums exist: one for the IP payload, the other for the encrypted TCP payload. For IPSec NAT-T mode, the Linux kernel did not validate the TCP payload checksum, thereby permitting corrupt packets. These were delivered to the ZooKeeper leader, including a corrupted length field for a string. When ZooKeeper used the length to allocate memory to deserialize the string, it raised an out-of-memory (OOM) exception.
Surprisingly, when this OOM exception happened, ZooKeeper continued to run. Heartbeats were normal and no leader re-election was triggered. When evaluating this incident in Panorama, no failure was reported either. We studied the ZooKeeper source code to understand why this happened. In ZooKeeper, a request is first picked up by the listener thread, which then calls the ZooKeeperServer thread that further invokes a chain of XXXRequestProcessor threads to process the request. The OOM exception happens in the PrepRequestProcessor thread, the first request processor. The ZooKeeperServer thread invokes the interface of the PrepRequestProcessor as follows:
```java
try {
firstProcessor.processRequest(s);
} catch (RequestProcessorException e) {
LOG.error("Unable to process request: " + e);
}
```
If the execution passes line 2, it provides positive evidence that the PrepRequestProcessor thread is healthy. If, instead, the execution reaches line 4, it represents negative evidence about PrepRequestProcessor. But with the Panorama hooks inserted at both places, no negative observations are reported. This is because the implementation of the processRequest API involves an indirection: it simply puts a request in a queue and immediately returns. Asynchronously, the PrepRequestProcessor thread polls the queue and processes the requests. Because of this design, even though the OOM exception causes the PrepRequestProcessor thread to exit its main loop, the ZooKeeperServer thread is still able to call processRequest and is unable to tell that PrepRequestProcessor has an issue. The hooks are only observing the status of the indirection layer, i.e., the queue, rather than the PrepRequestProcessor thread. Thus, negative observations only appear when the request queue cannot insert new items; but, by default, its capacity is Integer.MAX_VALUE.
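The effect can be reproduced with a minimal sketch: the boundary call succeeds as long as the queue accepts items, regardless of whether the consumer thread is alive. The class and method names are illustrative, not ZooKeeper’s actual code.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Minimal reproduction of the request-indirection problem in §4.1:
// processRequest only enqueues, so it succeeds even after the consumer
// thread has died. Names are illustrative, not ZooKeeper's code.
class IndirectProcessor {
    private final LinkedBlockingQueue<String> submitted =
            new LinkedBlockingQueue<>(); // default capacity: Integer.MAX_VALUE
    private volatile boolean consumerAlive = true;

    // Simulate the consumer's main loop exiting, e.g., on an OOM error.
    void crashConsumer() { consumerAlive = false; }

    // The boundary the caller observes: it cannot fail until the queue
    // itself is full, so a hook here sees only positive evidence.
    boolean processRequest(String request) {
        return submitted.offer(request);
    }

    boolean isConsumerAlive() { return consumerAlive; }
}
```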
Figure 2: Design patterns of component interactions and their impact on failure observability. ✓ means that a failure is observable to the other component, and ✗ means that it is unobservable.
### 4.2 Observability Patterns
Although the above case is a unique incident, we extrapolate a deeper implication for failure detection: certain design patterns can undermine failure observability in a system and thereby pose challenges for failure detection. To reveal this connection, consider two components C₁ and C₂ where C₁ makes requests of C₂. We expect that, through this interaction, C₁ and C₂ should be able to make observations about each other’s status. However, their style of interaction can have a significant effect on this observability.
We have identified the following four basic patterns of interaction (Figure 2), each having a different effect on this observability. Interestingly, we find examples of all four patterns in real-world system software.
(a) **No Indirection.** Pattern (a) is the most straightforward. C₁ makes a request to C₂, then C₂ optionally replies to C₁. This pattern has the best degree of observability: C₁ can observe C₂ from errors in its request path; C₂ can also observe C₁ to some extent in its reply path. Listing 1 shows an example of this pattern. In this case, C₁ is the follower and C₂ is the leader. C₁ first contacts C₂, then C₂ sends C₁ a snapshot or other information through an input stream. Failures are observed via errors or timeouts in the connection, I/O through the input stream, and/or reply contents.
(b) **Request Indirection.** A level of indirection exists in the request path: when C₁ makes a request to C₂, an intermediate layer (e.g., a proxy or a queue) takes the request and replies to C₁. C₂ will later take the request from the intermediate layer, process it, and optionally reply to C₁ directly. This design pattern has a performance benefit for both C₁ and C₂. It also provides decoupling between their two threads. But, because of the indirection, C₁ no longer directly interacts with C₂ so C₂’s observability is reduced. The immediate observation C₁ makes when requesting from C₂ does not reveal whether C₂ is having problems, since usually the request path succeeds as in the case in §4.1.
(c) **Reply Indirection.** Pattern (c) is not intuitive. C₁ makes a request, which is directly handled by C₂, but the reply goes through a layer of indirection (e.g., a queue or a proxy). Thus, C₁ can observe issues in C₂ but C₁’s observability to C₂ is reduced. One scenario leading to this pattern is when a component makes requests to multiple components and needs to collect more than one of their replies to proceed. In this case, replies are queued so that they can be processed en masse when a sufficient number are available. For example, in Cassandra, when a process sends digest requests to multiple replicas, it must wait for responses from $R$ replicas. So, whenever it gets a reply from a replica, it queues the reply for later processing.
(d) **Full Indirection.** In pattern (d), neither component directly interacts with the other, so they get the least observability. This pattern has a performance benefit since all operations are asynchronous, but the code logic can be complex. ZooKeeper contains an example: when a follower forwards a request to a leader, the request is processed asynchronously, and when the leader later notifies the follower to commit the request, that notification gets queued.
### 4.3 Implications
Pattern (a) has the best failure observability and is easiest for Panorama to leverage. The other three patterns are more challenging; placing observation hooks without considering the effects of indirection can cause incompleteness (though not inaccuracy) in failure detection (§2). That is, a positive observation will not necessarily mean the monitored component is healthy but a negative observation means the component is unhealthy. Pragmatically, this would be an acceptable limitation if the three indirection patterns were uncommon. However, we checked the cross-thread interaction code in several distributed systems and found, empirically, that patterns (a) and (b) are both pervasive. We also found that different software has different preferences, e.g., ZooKeeper uses pattern (a) frequently, but Cassandra uses pattern (b) more often.
This suggests Panorama should accommodate indirection in extracting observations. One solution is to instrument hooks in the indirection layer. But, we find that indirection layers in practice are implemented with various data structures and are often used for multiple purposes, making tracking difficult. We use a simple but robust solution and describe it in §5.4.
## 5 Observability Analysis
To systematically identify and extract useful observations from a component, Panorama provides an offline tool that statically analyzes a program’s source code, finds critical points, and injects hooks for reporting observations.
### 5.1 Locate Observation Boundary
Runtime errors are useful evidence of failure. Even if an error is tolerated by a requester, it may still indicate a critical issue in the provider. But, not all errors should be reported. Panorama only extracts errors generated when crossing component boundaries, because these constitute observations from the requester side. We call such domain-crossing function invocations observation boundaries.
The first step of observability analysis is to locate observation boundaries. There are two types of such boundaries: inter-process and inter-thread. An inter-process boundary typically manifests as a library API invocation, a socket I/O call, or a remote procedure call (RPC). Sometimes, it involves calling into custom code that encapsulates one of those three to provide a higher-level messaging service. In any case, with some domain knowledge about the communication mechanisms used, the analyzer can locate inter-process observation boundaries in source code. An inter-thread boundary is a call crossing two threads within a process. The analyzer identifies such boundaries by finding custom public methods in classes that extend the thread class.
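The inter-thread rule can be approximated with reflection, as a rough illustration; the paper’s analyzer instead works statically on source code (with Soot and AspectJ), and the helper names here are our own.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Illustrative approximation of the inter-thread boundary rule in §5.1:
// custom public methods declared by classes that extend Thread.
class BoundaryFinder {
    static List<String> interThreadBoundaries(Class<?> cls) {
        List<String> boundaries = new ArrayList<>();
        if (!Thread.class.isAssignableFrom(cls) || cls == Thread.class)
            return boundaries; // not a custom thread class
        for (Method m : cls.getDeclaredMethods())
            if (Modifier.isPublic(m.getModifiers()))
                boundaries.add(m.getName());
        return boundaries;
    }

    // A toy request-processor thread with one public entry point.
    static class PrepProcessor extends Thread {
        public void processRequest(String r) { /* enqueue and return */ }
    }
}
```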
### 5.2 Identify Observer and Observed
At each observation boundary, we must identify the observer and subject. Both identities are specific to the distributed system being monitored. For thread-level observation boundaries, the thread identities are statically analyzable, e.g., the name of the thread or class that provides the public interfaces. For process-level boundaries, the observer identity is the process’s own identity in the distributed system, which is known when the process starts; it only requires one-time registration with Panorama. We can also usually identify the subject identity, if the remote invocations use well-known methods, via either an argument of the function invocation or a field in the class. A challenge is that sometimes, due to nested polymorphism, the subject identity may be located deep down in the type hierarchy. For example, it is not easy to determine if `OutputStream.write()` performs network I/O or local disk I/O. We address this challenge by changing the constructors of remote types (e.g., the methods that obtain a socket’s I/O streams) to return a compatible wrapper that extends the return type with a subject field and can be differentiated from other types at runtime by checking if that field is set.
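The wrapper trick might look like the following sketch; the wrapper type and its subject field are our own names, chosen for illustration.

```java
import java.io.FilterOutputStream;
import java.io.OutputStream;

// Sketch of the constructor-wrapping trick in §5.2: remote streams are
// wrapped so that a runtime check can tell network I/O apart from local
// I/O. The wrapper type and field are our own illustration.
class SubjectTaggedOutputStream extends FilterOutputStream {
    final String subject; // identity of the remote peer, set at creation

    SubjectTaggedOutputStream(OutputStream out, String subject) {
        super(out);
        this.subject = subject;
    }

    // At an observation point, the instrumented code checks whether a
    // stream carries a subject before reporting an observation about it.
    static String subjectOf(OutputStream os) {
        return (os instanceof SubjectTaggedOutputStream)
                ? ((SubjectTaggedOutputStream) os).subject
                : null;
    }
}
```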
### 5.3 Extract Observation
Once we have observation boundaries, the next step is to search near them for observation points: program points that can supply critical evidence about observed components. A typical example of such an observation point is an exception handler invoked when an exception occurs at an observation boundary.
To locate observation points that are exception handlers, a straightforward approach is to first identify the type of exceptions an observation boundary can generate, then locate the catch clauses for these types in code regions after the boundary. There are two challenges with this approach. First, as shown in Figure 3, an exception could be caught at the caller or caller’s caller. Recursively walking up the call chain to locate the clause is cumbersome and could be inaccurate. Second, the type of exception thrown by the boundary could be a generic exception such as IOException that could be generated by other non-boundary code in the same try clause. These two challenges can be addressed by inserting a try just before the boundary and a catch right after it. This works but, if the observation boundaries are frequent, the excessive wrapping can cause non-trivial overhead.
The ideal place to instrument is the shared exception handler for adjacent invocations. Our solution is to add a special field in the base Throwable class to indicate the subject identity and the context, and to ensure boundary-generated exceptions set this field. Then, when an exception handler is triggered at runtime, we can check if this field is set, and if so treat it as an observation point. We achieve the field setting by wrapping the outermost function body of each boundary method with a try and catch, and by rethrowing the exception after the hook. Note that this preserves the original program semantics.
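Since plain Java code cannot add a field to the base Throwable class (Panorama does this by modifying the class itself), this sketch approximates the idea with a side table keyed by exception identity; the helper names are hypothetical.

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Sketch of the exception-tagging idea in §5.3. Panorama adds a field to
// Throwable; this illustration keeps the subject/context in a side table.
class ExceptionTags {
    private static final Map<Throwable, String> tags =
            Collections.synchronizedMap(new WeakHashMap<>());

    // What a boundary method body would be rewritten into: tag any
    // escaping exception with subject and context, then rethrow it
    // unchanged, preserving the original program semantics.
    static void boundaryCall(String subject, String context, Runnable body) {
        try {
            body.run();
        } catch (RuntimeException e) {
            tags.put(e, subject + "/" + context);
            throw e;
        }
    }

    // An exception handler is an observation point only if the tag is set.
    static String tagOf(Throwable t) { return tags.get(t); }
}
```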
Another type of observation point we look for is one where the program handles a response received from across a boundary. For example, the program may raise an exception for a missing field or wrong signature in the returned DataNode in Figure 3, indicating potential partial failure or corrupt state in the remote process. To locate these observation points, our analyzer performs intra-procedural analysis to follow the data flow of responses from a boundary. If an exception thrown is control-dependent on the response, we consider it an observation point, and we insert code to set the subject/context field before throwing the exception just as described earlier. This data-flow analysis is conservative: e.g., the code `if (a + b > 100) { throw new Exception("unexpected"); }`, where `a` comes from a boundary but `b` does not, is not considered an observation point because the exception could be due to `b`. In other words, our analysis may miss some observation points but will not locate wrong observation points.
So far, we have described negative observation points, but we also need mechanisms to make positive observations. Ideally, each successful interaction across a boundary is an observation point that can report positive evidence. But, if these boundaries appear frequently, the positive observation points can be excessive. So, we coalesce similar positive observation points that are located close together.
For each observation point, the analyzer inserts hooks to discover evidence and report it. At each negative observation point, we get the subject identity and context from the modified exception instance. We statically choose the status; if the status is to be some level of UNHEALTHY then we set this level based on the severity of the exception handling. For example, if the exception handler calls System.exit(), we set the status to a high level of UNHEALTHY. At each positive observation point, we get the context from the nearby boundary and also statically choose the status. We immediately report each observation to the Panorama library, but the library will typically not report it synchronously. The library will buffer excessive observations and send them in one aggregate message later.
### 5.4 Handling Indirection
As we discussed in §4, observability can be reduced when indirection exists at an observation boundary. For instance, extracted observations may report the subject as healthy while it is in fact unhealthy. The core issue is that indirection splits a single interaction between components among multiple observation boundaries. A successful result at the first observation boundary may only indicate partial success of the overall interaction; the interaction may only truly complete later, when, e.g., a callback is invoked, or a condition variable unlocks, or a timeout occurs. We must ideally wait for an interaction to complete before making an observation.
We call the two locations of a split interaction the ob-origin and ob-sink, reflecting the order they’re encountered. Observations at the ob-origin represent positive but temporary and weak evidence. For example, in Figure 4, the return from sendRR is an ob-origin. Where the
```java
void deserialize(DataNode dt, InputArchive ia) {
DataNode node = ia.readRecord("node");
if (node.parent == null) {
LOG.error("Missing parent.");
throw new IOException("Invalid Datatree");
}
dt.addData(node);
try {
deserialize(getDataTree(), ia);
} catch (IOException e) {
Ob-Point
sock.close();
}
}
```
Figure 3: Observation points in direct interaction (§4.2).
Figure 4: Observation points when indirection exists (§4.2).
callback of handler, response, is invoked, it is an ob-sink. In addition, when the program later blocks waiting for the callback, e.g., handler.get, the successful return is also an ob-sink. If an ob-origin is properly matched with an ob-sink, the positive observation becomes complete and strong. Otherwise, an indistinguishable ob-origin is only a weak observation and may degrade to a negative observation, e.g., when handler.get times out.
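For illustration, the placement of hooks around an asynchronous boundary-crossing call (the Figure 4 pattern) can be sketched as follows. This is a simplification, not Panorama's actual instrumentation: `reportPending` and `reportHealthy` are hypothetical stand-ins for calls into the Panorama client library, and `sendRR` stands in for an asynchronous request/response method.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of hook placement around an asynchronous call. The reporting
// methods are illustrative stand-ins, not Panorama's real API.
public class ObPoints {
    static final List<String> log =
        Collections.synchronizedList(new ArrayList<>());

    static void reportPending(String subject) { log.add("PENDING:" + subject); }
    static void reportHealthy(String subject) { log.add("HEALTHY:" + subject); }

    // stands in for an asynchronous boundary-crossing call such as sendRR
    static CompletableFuture<String> sendRR(String peer) {
        return CompletableFuture.supplyAsync(() -> "ok from " + peer);
    }

    public static void main(String[] args) {
        CompletableFuture<String> handler = sendRR("node2");
        reportPending("node2");  // ob-origin: weak, temporary positive evidence
        CompletableFuture<Void> sink =
            handler.thenRun(() -> reportHealthy("node2")); // ob-sink: callback fires
        handler.join();          // blocking wait on the result is also an ob-sink
        sink.join();
        System.out.println(log);
    }
}
```

The PENDING observation becomes strong positive evidence only once one of the ob-sinks is reached; if neither is, it may later degrade to a negative observation.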
Tracking an interaction split across multiple program locations is challenging given the variety of indirection implementations. To properly place hooks when indirection exists, the Panorama analyzer needs to know what methods are asynchronous and the mechanisms for notification. For instance, a commonly used one is Java `FutureTask` [40]. For custom methods, this knowledge comes from specifications of the boundary-crossing interfaces, which only requires moderate annotation. With this knowledge, the analyzer considers an ob-origin to be immediately after any call site of an asynchronous interface. We next discuss how to locate ob-sinks.
We surveyed the source code of popular distributed systems and found the majority of ob-sinks fall into four patterns: (1) invoking a callback-setting method; (2) performing a blocking wait on a callback method; (3) checking a completion flag; and (4) reaching another observation boundary with a third component, in cases when a request must be passed on further. For the first two patterns, the analyzer considers the ob-sink to be before and after the method invocation, respectively. For the third pattern, the analyzer locates the spin-loop body and considers the ob-sink to be immediately after the loop. The last pattern resembles SEDA [48]: after A asynchronously sends a request to B, B does not notify A of the status after it finishes but rather passes on the request to C. Therefore, for that observation boundary in B, the analyzer needs to not only insert a hook for C but also treat it as an ob-sink for the A-to-B interaction.
When our analyzer finds an ob-origin, it inserts a hook that submits an observation with the special status `PENDING`. This means that the observer currently sees only weak positive evidence about the subject's status, but expects to receive stronger evidence shortly. At any ob-sink indicating positive evidence, our analyzer inserts a hook to report a `HEALTHY` observation. At any ob-sink indicating negative evidence, the analyzer inserts a hook to report a negative observation.
To link an ob-sink observation with its corresponding ob-origin observation, these observations must share the same subject and context. To ensure this, the analyzer uses a similar technique as in exception tracking. It adds a special field containing the subject identity and context to the callback handler, and inserts code to set this field at the ob-origin. If the callback is not instrumentable, e.g., because it is an integer resource handle, then the analyzer inserts a call to the Panorama library to associate the handle with an identity and context.
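The context-linking technique can be sketched as follows. The field and class names here are illustrative (the analyzer injects such a field via instrumentation; it is not written by hand):

```java
import java.util.function.Consumer;

// Sketch of linking an ob-sink observation to its ob-origin: the analyzer
// adds a field carrying the subject identity and context to the callback
// handler, and inserts code at the ob-origin to set it. Names are illustrative.
public class LinkedCallback {
    static class ObContext {
        final String subject, context;
        ObContext(String s, String c) { subject = s; context = c; }
    }

    static class ResponseHandler implements Consumer<String> {
        ObContext panoramaCtx;   // field injected by the analyzer
        String sinkReport;
        public void accept(String response) {
            // ob-sink: report with the identity captured at the ob-origin
            sinkReport = "HEALTHY " + panoramaCtx.subject + "/" + panoramaCtx.context;
        }
    }

    public static void main(String[] args) {
        ResponseHandler handler = new ResponseHandler();
        // ob-origin: inserted code stores subject and context on the handler
        handler.panoramaCtx = new ObContext("node2", "sendRR");
        handler.accept("ok");    // later, the callback fires
        System.out.println(handler.sinkReport);
    }
}
```

When the callback object is not instrumentable, the library-side mapping from an opaque handle to an identity and context plays the role of this field.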
Sometimes, the analyzer finds an ob-origin but cannot determine which ob-sink or cannot extract the subject identity or context. This can happen due to either lack of knowledge or the developers having forgotten to check for completion in the code. In such a case, the analyzer will not instrument the ob-origin, to avoid making misleading `PENDING` observations.
We find that ob-origin and ob-sink separation is useful in detecting not only issues involving indirection but also liveness issues. To see why, consider what happens when A invokes a boundary-crossing blocking function of B, and B gets stuck so the function never returns. When this happens, even though A witnesses B's problem, it does not get a chance to report the issue because it never reaches the observation point following the blocking call. Inserting an ob-origin before the function call provides evidence of the liveness issue: the LOS will see an old `PENDING` observation with no subsequent corresponding ob-sink observation. Thus, besides asynchronous interfaces, call sites of synchronous interfaces that may block for a long time should also be included in the ob-origin set.
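How a stale `PENDING` observation exposes a liveness issue can be sketched as follows (an illustrative simplification, not Panorama's implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: an ob-origin before a blocking call leaves PENDING evidence that is
// cleared at the ob-sink; if the call never returns, the PENDING entry ages
// out and becomes negative evidence. The API below is illustrative.
public class PendingTracker {
    final Map<String, Long> pending = new HashMap<>();

    void obOrigin(String subject, long now) { pending.put(subject, now); }
    void obSink(String subject)             { pending.remove(subject); }

    // true if the subject has weak PENDING evidence older than timeoutMs
    boolean looksStuck(String subject, long now, long timeoutMs) {
        Long since = pending.get(subject);
        return since != null && now - since > timeoutMs;
    }

    public static void main(String[] args) {
        PendingTracker t = new PendingTracker();
        t.obOrigin("node3", 0);    // before the blocking call to node3
        // the call never returns, so obSink is never reached
        System.out.println(t.looksStuck("node3", 5_000, 1_000)); // stale PENDING
        t.obOrigin("node4", 0);
        t.obSink("node4");         // normal completion clears the evidence
        System.out.println(t.looksStuck("node4", 5_000, 1_000));
    }
}
```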
### 6 Implementation
We implemented the Panorama service in ~6,000 lines of Go code, and implemented the observability analyzer (§5) using the Soot analysis framework [46] and the AspectJ instrumentation framework [2].
We defined Panorama's interfaces using protocol buffers [7]. We then used the gRPC framework [5] to build the RPC service and to generate clients in different languages, so the system can easily be used by components written in different languages. Panorama provides a thin library that wraps the gRPC client for efficient observation reporting; each process participating in observation reporting is linked with this library. The thin library provides features such as asynchronous reporting, buffering and aggregation of frequent observations, identity resolution, rate limiting, quick cancellation of PENDING statuses, and mapping of ob-sink handles (§5.4). As a result, most operations related to observation reporting do not directly trigger local RPC calls to Panorama, which keeps the performance impact low.
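The buffering-and-aggregation behavior of the thin library can be sketched as follows. This is a deliberate simplification under assumed names; the real library also handles asynchronous reporting, identity resolution, rate limiting, and the other features listed above.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: frequent observations with the same subject and status are counted
// locally and flushed as one aggregate message, so most report() calls avoid
// a local RPC. Illustrative only; not Panorama's actual client library.
public class BufferedReporter {
    private final Map<String, Integer> buffer = new HashMap<>();
    private final int flushThreshold;
    final List<String> sent = new ArrayList<>();  // stands in for RPCs to Panorama

    BufferedReporter(int flushThreshold) { this.flushThreshold = flushThreshold; }

    void report(String subject, String status) {
        String key = subject + ":" + status;
        buffer.merge(key, 1, Integer::sum);
        if (buffer.get(key) >= flushThreshold) flush(key);
    }

    private void flush(String key) {
        sent.add(key + " x" + buffer.remove(key));  // one aggregate RPC
    }

    public static void main(String[] args) {
        BufferedReporter r = new BufferedReporter(3);
        for (int i = 0; i < 7; i++) r.report("node2", "HEALTHY");
        System.out.println(r.sent);  // two aggregate messages; one count still buffered
    }
}
```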
### 7 Evaluation
In this section, we evaluate our Panorama prototype to answer several key questions: (1) Can observations be systematically captured? (2) Can observation capturing detect regular failures? (3) Can Panorama detect production gray failures? (4) How do transient failures affect Panorama? (5) How much overhead does an observer incur by participating in the Panorama service?
### 7.1 Experiment Setup
We run our experiments in a cluster of 20 physical nodes. Each machine has a 2.4 GHz 10-core Intel Xeon E5-2640v4 CPU, 64 GB of RAM, and a 480 GB SATA SSD; they all connect to a single 10 Gbps Ethernet switch. They run Ubuntu 16.04 with Linux kernel version 4.4.0. We evaluate Panorama with four widely-used distributed systems: ZooKeeper, Hadoop, HBase, and Cassandra. HBase uses HDFS for storing data and ZooKeeper for coordination, so an HBase setup resembles a service with multiple subsystems. We continuously exercise these services with various benchmark workloads to represent an active production environment.
### 7.2 Integration with Several Systems
Panorama provides a generic observation and failure detection service. To evaluate its generality, we apply it to ZooKeeper, HDFS, Hadoop, HBase, and Cassandra, at both process and thread level. The integration is successful without significant effort or changes to the system design. Our simple abstractions and APIs (§3.2) naturally support various types of failure evidence in each system. For instance, we support semantic errors, such as responses with missing signatures; generic errors, such as remote I/O exceptions; and liveness issues, such as indefinite blocking or custom time-outs. The integration is enabled by the observability analyzer (§5). In applying the analyzer to a system, we need annotations about what boundary-crossing methods to start with, what methods involve indirection, and what patterns it uses (§5.4). The annotation effort to support this is moderate (Table 2).
HDFS required the most annotation effort: it took one author about 1.5 days to understand the HDFS source code, identify the interfaces, and write the annotation specification. Fortunately, most of these boundary-crossing methods remain stable across releases. When running the observability analysis, Cassandra is more challenging to analyze than the others since it frequently uses indirection. On the other hand, its mechanisms are also well-organized, which makes the analysis systematic. The observability analysis is mainly intra-procedural and can finish instrumentation within 10 seconds for each of the four systems (Table 2). Figure 5 shows the observations collected from two instrumented processes in ZooKeeper. The figure also shows that the observations made change as the observer executes, and depend on the process's interaction patterns.
### 7.3 Detection of Crash Failures
Panorama aims to detect complex failures not limited to fail-stop. As a sanity check on the effectiveness of its detection capability, we first evaluate how well Panorama detects fail-stop failures. To measure this, we inject various fail-stop faults including process crashes, node shutdowns, and network disconnections. Table 3 shows the detection time for eight representative crash-failure cases: failures injected into the ZooKeeper leader, ZooKeeper follower, Cassandra data node, Cassandra seed node, HDFS name node, HDFS data node, HBase master, and HBase regionserver. We see that with Panorama the observers take less than 10 s to detect all eight cases, and indeed take less than 10 ms to detect all ZooKeeper failures. The observers make the observations leading to these detections when, while interacting with the
<table>
<thead>
<tr>
<th>System</th>
<th>Annotations</th>
<th>Analysis Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>ZooKeeper</td>
<td>24</td>
<td>4.2</td>
</tr>
<tr>
<td>Cassandra</td>
<td>34</td>
<td>6.8</td>
</tr>
<tr>
<td>HDFS</td>
<td>65</td>
<td>9.9</td>
</tr>
<tr>
<td>HBase</td>
<td>16</td>
<td>7.5</td>
</tr>
</tbody>
</table>
Table 2: Annotations and analysis time (in seconds).
failed components, they experience either request/response time-outs or I/O exceptions.
As a basis for comparison, we also measure failure detection time when using the failure detectors built into these systems. We find that for ZooKeeper, Panorama detects the failures slightly faster than the built-in detector, while for Cassandra, HDFS datanode and HBase master, Panorama achieves much faster detection time. This is because, to tolerate asynchrony, Cassandra and HDFS use conservative settings for declaring failures based on loss of heartbeats. For HDFS namenode, we use a High-Availability setup that leverages ZooKeeper for failure detection (when a ZooKeeper ephemeral node expires). Under this setup, the built-in detector achieves a slightly faster time than Panorama because the ZooKeeper service is co-located with HDFS, whereas Panorama’s detection is from observations made by remote datanodes.
### 7.4 Detection of Gray Failures
To evaluate Panorama’s ability to detect complex failures, we reproduce 15 real-world production gray failures from ZooKeeper, HDFS, HBase, and Cassandra, described in Table 4. Each of these caused severe service disruption, e.g., all write requests would fail. Worse still, in each case the system was perceived as healthy, so no recovery actions were taken during the resulting outage.
Panorama is able to detect the gray failure for all 15 cases. Figure 6 shows Panorama’s detection time (in seconds) for each case. We often find that a failure is observed and reported by multiple observers; we use the first failure observation’s timestamp in a final verdict as the detection time. The detection times have a minimum of 0.2 s and a maximum of 7 s, with the majority smaller than 3 s. The intra-process observers tend to capture failure evidence faster than the inter-process observers. For all cases, the failure evidence clearly stands out in the observations collected about the sick process, so the decision algorithm (§3.6) requires no special tuning.
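The detection-time measurement described above can be sketched as follows (an illustrative helper, not part of Panorama):

```java
import java.util.List;

// Sketch: among all failure observations that feed the final verdict, the
// earliest timestamp relative to fault injection is taken as detection time.
public class DetectionTime {
    static long detectionTime(long faultInjectedAt, List<Long> failureObsTimestamps) {
        long first = failureObsTimestamps.stream().min(Long::compare).orElseThrow();
        return first - faultInjectedAt;
    }

    public static void main(String[] args) {
        // three observers notice the fault at different times (ms)
        System.out.println(detectionTime(1000, List.of(1200L, 1900L, 3400L))); // 200
    }
}
```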
We compare Panorama with three baselines: the system's built-in failure detector, Falcon [34], and the ϕ accrual detector [29]. As shown in Figure 6, in all but one case, no baseline detects the gray failure within 300 s. That one case is f9, where Cassandra's built-in detector, a form of the ϕ detector with some application state, reports failure after 86 s. By then, the partial fault of the Cassandra commitlog executor component has degraded into a complete failure: uncommitted writes pile up on the JVM heap and cause the process to spend most of its time in garbage collection.
Figure 7 shows a detailed timeline of the detection of gray failure f1. We see that the observers (in this case the followers) quickly gather failure evidence while interacting with the unhealthy leader. Also, when the leader's fault is gone, those observers quickly gather positive evidence.
---
Table 3: Crash-failure detection time. *The name node marks the data node stale in 30 s, and dead in 12 min.
<table>
<thead>
<tr>
<th>Detector</th>
<th>ZooKeeper</th>
<th>Cassandra</th>
<th>HDFS</th>
<th>HBase</th>
</tr>
<tr>
<th></th>
<th>leader</th>
<th>seed</th>
<th>namenode</th>
<th>master</th>
</tr>
<tr>
<th></th>
<th>follower</th>
<th>datanode</th>
<th>datanode</th>
<th>regionserver</th>
</tr>
</thead>
<tbody>
<tr>
<td>Built-in</td>
<td>13 ms</td>
<td>28 s</td>
<td>708 ms</td>
<td>11 s</td>
</tr>
<tr>
<td>Panorama</td>
<td>8 ms</td>
<td>8 s</td>
<td>723 ms</td>
<td>1.5 s</td>
</tr>
</tbody>
</table>
Figure 6: Detection time for gray failures in Table 4.
Table 4: Evaluated real-world gray failures. In all cases, some severe service disruption occurred (e.g., all create requests failed) while the failing component was perceived to be healthy.
<table>
<thead>
<tr>
<th>ID</th>
<th>System</th>
<th>Fault Synopsis</th>
</tr>
</thead>
<tbody>
<tr>
<td>f1</td>
<td>ZooKeeper</td>
<td>faulty disk in leader causes cluster lock-up</td>
</tr>
<tr>
<td>f2</td>
<td>ZooKeeper</td>
<td>transient network partition leads to prolonged failures in serving requests</td>
</tr>
<tr>
<td>f3</td>
<td>ZooKeeper</td>
<td>corrupted packet in de-serialization</td>
</tr>
<tr>
<td>f4</td>
<td>ZooKeeper</td>
<td>transaction thread exception</td>
</tr>
<tr>
<td>f5</td>
<td>ZooKeeper</td>
<td>leader fails to write transaction log</td>
</tr>
<tr>
<td>f6</td>
<td>Cassandra</td>
<td>response drop blocks repair operations</td>
</tr>
<tr>
<td>f7</td>
<td>Cassandra</td>
<td>stale data leads to wrong node states</td>
</tr>
<tr>
<td>f8</td>
<td>Cassandra</td>
<td>streaming silently fails on unexpected error</td>
</tr>
<tr>
<td>f9</td>
<td>Cassandra</td>
<td>commitlog executor exit causes GC storm</td>
</tr>
<tr>
<td>f10</td>
<td>HDFS</td>
<td>thread pool exhaustion in master</td>
</tr>
<tr>
<td>f11</td>
<td>HDFS</td>
<td>failed pipeline creation prevents recovery</td>
</tr>
<tr>
<td>f12</td>
<td>HDFS</td>
<td>short circuit reads blocked due to death of domain socket watcher</td>
</tr>
<tr>
<td>f13</td>
<td>HDFS</td>
<td>blockpool fails to initialize but continues</td>
</tr>
<tr>
<td>f14</td>
<td>HBase</td>
<td>dead root drive on region server</td>
</tr>
<tr>
<td>f15</td>
<td>HBase</td>
<td>replication stalls with empty WAL files</td>
</tr>
</tbody>
</table>
This positive evidence clears the failure observation. During the failure period, no other baseline reports failure. Figure 7 also shows the view from a ZooKeeper client that we run continuously throughout the experiment as a reference. Panorama's reporting closely matches this client's experience. Interestingly, since the gray failure mainly impacts write requests while the client executes a mixture of read and write requests, the client's view is not very stable; nevertheless, Panorama consistently reports a verdict of UNHEALTHY during the failure period.
### 7.5 Fault Localization
In addition to detecting the 15 production failures quickly, Panorama also pinpoints each failure with detailed context and observer (§3.2) information. This localization capability allows administrators to interpret the detection results with confidence and take concrete actions. For example, in detecting the crash failure in the ZooKeeper follower, the verdict for the leader is based on observations such as `{|peer@3,peer@5,peer@8|} 2018-03-23T02:28:58.873 {Learner: U, RecvWorker: U, QuorumCnxManager: U}`, which identify the observer as well as the contexts Learner, RecvWorker, and QuorumCnxManager. In detecting gray failure f1, the negative observations of the unhealthy leader are associated with three contexts, SerializeUtils, DataTree, and StatPersisted; this localizes the failure to the serialization thread in the leader.
### 7.6 Transient Failures and Normal Operations
Because Panorama can gather observations from any component in a system, there is a potential concern that noisy observations will lead to many false alarms. Empirically, we find that this does not happen. The Panorama analyzer assigns the context of an observation properly to avoid falsely aggregating observations made while interacting with different functionalities of a complex process. The simple decision algorithm in §3.6 is robust enough to prevent a few biased observers or transient failures from dominating the verdict. Figure 8 shows the verdict for the ZooKeeper leader in an experiment. A few followers report transient faults about the leader in one context, so Panorama decides on a negative verdict. But within a few seconds the verdict changes, due to new positive observations and the expiration of the negative ones. Panorama then judges the leader healthy for the remainder of the experiment, which matches the ground truth.
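The verdict behavior described here can be sketched as a majority-style decision with expiring negative observations. This is an assumption-laden simplification in the spirit of §3.6, not the paper's exact algorithm:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: negative observations expire after a TTL, and the subject is judged
// UNHEALTHY only if a majority of distinct observers still hold unexpired
// negative evidence. Simplified for illustration.
public class Verdict {
    static class Obs {
        final String observer; final boolean healthy; final long ts;
        Obs(String observer, boolean healthy, long ts) {
            this.observer = observer; this.healthy = healthy; this.ts = ts;
        }
    }

    static String decide(List<Obs> all, long now, long ttlMs) {
        List<String> observers = new ArrayList<>(), negative = new ArrayList<>();
        for (Obs o : all) {
            if (!observers.contains(o.observer)) observers.add(o.observer);
            if (!o.healthy && now - o.ts <= ttlMs && !negative.contains(o.observer))
                negative.add(o.observer);
        }
        return 2 * negative.size() > observers.size() ? "UNHEALTHY" : "HEALTHY";
    }

    public static void main(String[] args) {
        List<Obs> obs = List.of(
            new Obs("follower1", false, 0),     // negative, but long expired
            new Obs("follower2", true, 9_000),
            new Obs("follower3", true, 9_500));
        System.out.println(decide(obs, 10_000, 5_000)); // HEALTHY
    }
}
```

Under this sketch, a single biased observer or a transient fault cannot dominate the verdict, matching the behavior reported in the experiment.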
We deploy Panorama with ZooKeeper and run for 25 hours, during which multiple ZooKeeper clients continuously run various workloads to emulate normal operations in a production environment. In total, Panorama generates 797,219 verdicts, with all but 705 (0.09%) of them being HEALTHY; this is a low false alarm rate. In fact, all of the negative observations are made in the first 22 seconds, during which the system is bootstrapping and unstable. After those 22 seconds, no negative observations are reported for the remaining 25 hours.
We also inject minor faults, including an overloaded component, a load spike, and a transient network partition, modeled after two production ZooKeeper and HDFS traces. These minor faults do not affect the regular service. We find Panorama is overall resilient to this noise in reaching a verdict. For example, an overloaded ZooKeeper follower made a series of misleading observations that the leader is UNHEALTHY. But these biased observations from a single observer did not result in an UNHEALTHY verdict for the leader. When there were many such overloaded followers, however, the leader was falsely convicted as UNHEALTHY even though the actual issues were within the observers.
### 7.7 Performance
Table 5 shows microbenchmark results: how long four major operations in Panorama take on average. Reporting an observation to Panorama only requires a local RPC, so reporting is fast (average latency around 100 µs). The asynchronous reporting API takes even less time: on average less than 1 µs. Propagating an observation to another Panorama instance takes around 800 µs. Figure 9 shows how the propagation latency changes as the cluster size increases.
When a Panorama instance is active, the CPU utilization attributable to it is on average 0.7%. For each monitored subject, the number of observations kept in LOS is bounded, so the memory usage is close to constant; the total memory usage thus depends on the number of monitored subjects. Measuring the ZooKeeper deployment with Panorama, we find that the heap memory allocation stabilizes at ~7 MB for a moderately active instance, and at ~46 MB for a highly active instance. The network bandwidth usage of a Panorama instance for exchanging observations is small compared to the bandwidth usage of the monitored components (Figure 10).
We test the end-to-end request latency and throughput impact of integrating with Panorama for HDFS, ZooKeeper, HBase, and Cassandra, using YCSB [16], DFSIO, and a custom benchmark tool. Table 6 shows the results. The latency increase and throughput decrease for each system are below 3%. We achieve this low overhead because the reporting API is fast and because most hooks are in error-handling code, which is not triggered in normal operation. The positive-observation hooks do lie in the normal execution path, but their cost is reduced by coalescing the hooks in the analyzer (§5.3) and batching the reporting in the thin client library. Without these optimizations, the performance overhead can be up to 18%.
### 8 Discussion and Limitations
Panorama proposes a new way of building a failure detection service by constructing in-situ observers. The evaluation results demonstrate the effectiveness of leveraging observability for detecting complex production failures. The process of integrating Panorama with real-world distributed systems also made us realize how diverse programming paradigms affect system observability. For example, HDFS has a method `createBlockOutputStream` that takes a list of data nodes as an argument and creates a pipeline among them; if this method fails, it indicates that one of the data nodes in the pipeline is problematic. From an observability point of view, if negative evidence is observed through this method, it is associated with multiple possible subjects. Fortunately, an errorIndex variable is maintained internally to indicate which data node caused the error, and it can be used to determine the exact subject. It is worth investigating how to modularize a system and design its interfaces so as to make failure observability easier to capture.
There are several limitations of Panorama that we plan to address in future work. First, Panorama currently focuses on failure detection. To improve end-to-end availability, we plan to integrate the detection results with failure recovery actions. Second, Panorama currently does not track causality. Enhancing observations with causality information will be useful for correctly detecting and pinpointing failing components in large-scale cascading failures. Third, we plan to add support for languages other than Java to the Panorama analyzer, and evaluate it with a broader set of distributed systems.
### 9 Related Work
Failure Detection. There is an extensive body of work on studying and improving failure detection for distributed systems [8, 13, 14, 20, 29, 47]. A recent prominent work in this space is Falcon [34], whose authors argue that a perfect failure detector (PFD) can be built [9] by replacing end-to-end timeouts with layers of spies that can kill slow processes. Panorama is complementary to these efforts, which mainly focus on detecting crash failures; Panorama's goal is to detect complex production failures [11, 25, 30]. In terms of approach, Panorama is unique in enhancing system observability by constructing in-situ observers inside any component's code, instead of using dedicated detectors such as spies or sensors that sit outside components' normal execution paths.
Monitoring and Tracing. Improving monitoring and tracing of production systems is also an active research area. Examples include Magpie [12], X-Trace [21], Dapper [45], and Pivot Tracing [35]. The pervasive metrics collected by these systems enhance system observability, and their powerful tracing capabilities may help Panorama better deal with the indirection challenge (§4). But the traces they collect are massive and difficult to reason about [15, 37, 44]. Panorama, in contrast, leverages errors and exceptions generated during an observer's normal execution to report complex but serious failures.
Accountability. Accountability is useful for detecting Byzantine component behavior in a distributed system [28, 51]. PeerReview [27] provides accountability by having other nodes collect evidence about the correctness of a node through their message exchanges. Panorama's approach is inspired by PeerReview in that it also leverages evidence about other components in a system. But Panorama mainly targets production gray failures instead of Byzantine faults. Unlike PeerReview, Panorama places observability hooks in the existing code of a component and does not require a reference implementation or a special protocol.
### 10 Conclusion
We present Panorama, a system for detecting production failures in distributed systems. The key insight enabling Panorama is that system observability can be enhanced by automatically turning each component into an observer of the other components with which it interacts. By leveraging these first-hand observations, a simple detection algorithm can achieve high detection accuracy. In building Panorama, we further discover observability patterns and address the challenge of reduced observability due to indirection. We implement Panorama and evaluate it, showing that it introduces minimal overhead to existing systems. Panorama can detect and localize 15 real-world gray failures in less than 7 s, whereas existing detectors detect only one of them within 300 s. The source code of the Panorama system is available at https://github.com/ryanhuang/panorama.
### Acknowledgments
We thank the OSDI reviewers and our shepherd, Ding Yuan, for their valuable comments that improved the paper. We appreciate the support from CloudLab [43] for providing a great research experiment platform. We also thank Yezhuo Zhu for sharing ZooKeeper production traces and Jinfeng Yang for sharing HDFS production traces. This work was supported in part by a Microsoft Azure Research Award.
### References
Stichting Mathematisch Centrum
AFDELING INFORMATICA
(DEPARTMENT OF COMPUTER SCIENCE)
L.G.L.T. MEERTENS
DEFINITION OF AN ABSTRACT ALGOL 68 MACHINE
kruislaan 413 1098 SJ amsterdam
Printed at the Mathematical Centre, 413 Kruislaan, Amsterdam.
The Mathematical Centre, founded the 11-th of February 1946, is a non-profit institution aiming at the promotion of pure mathematics and its applications. It is sponsored by the Netherlands Government through the Netherlands Organization for the Advancement of Pure Research (Z.W.O.).
Definition of an abstract ALGOL 68 machine
by
L.G.L.T. Meertens
ABSTRACT
This report contains the definition of a machine-independent abstract machine, the "MIAM", whose code may serve as the target code for a portable ALGOL 68 compiler. Implementing ALGOL 68 with the MIAM entails two steps: implementing ALGOL 68 in terms of the MIAM, and implementing the MIAM in terms of an actual computer. This report defines only the "core" of the MIAM, which is sufficient to model all actions prescribed in the sections headed with "Semantics" of the Revised Report, with the exception of the widening coercions and the denotations.
KEY WORDS & PHRASES: ALGOL 68, abstract machine, compiler, portability
0. INTRODUCTION
This report contains the definition of the "MIAM" ("Machine-Independent Abstract Machine"), whose code may serve as the target code for a portable ALGOL 68 compiler. The philosophy that has governed the design of the MIAM, and notably the "cut principle", have been described in [1] and will not be repeated here. In making this definition available, it is hoped that it may also be instructive in the task of creating an ALGOL 68 compiler for a fixed target machine.
The definition given here, just as the ALGOL 68 Revised Report [3] (some familiarity with which is assumed), is not easy to read. Since the definition of the MIAM is a contract (although not one with legal standing), it aims at a level of precision that is threatened by the informality required for an enjoyable exposition. Even worse, any reliance on an assumption, however reasonable in itself, as to how the MIAM will be used in translating ALGOL 68 programs, that cannot be rigorously deduced from the definition proper, immediately destroys the cut principle.
However, some considerations as to why particular solutions have been chosen, suggestions for possible implementation approaches and other hopefully helpful remarks are incorporated in the text by placing them between "double pragmatic brackets", viz., "{{" and "}}". These remarks should in no way be construed to be part of the definition {{although they may be helpful to show the intended meaning in the case of shortcomings in the definition}}.
Implementing ALGOL 68 with the MIAM entails two steps: implementing ALGOL 68 in terms of the MIAM, and implementing the MIAM in terms of an actual computer. In order to reduce confusion, the term "translation" will be used for the former, and "realization" for the latter step.
The definition given in this report defines only the "core" of the MIAM. For a complete definition, a large number of relatively simple instructions have to be added, e.g., to deal with the numerous operations defined in the Standard Prelude. The core defined here is sufficient to model all actions prescribed in the sections headed with "Semantics" of the Revised Report, with the exception of the widening coercions and the denotations.
One caveat is in order. The MIAM described here has not been tested in an actual effort to translate ALGOL 68, nor has it been realized on an actual computer. Such an effort is bound to bring problems to light that have not been foreseen in the design phase.
1. THE MACHINE AND THE PROGRAM
1.1. Tokens
a) A "token" is a primitive entity. Tokens that are named with different names are different.
{{Tokens serve as literals without inherent meaning or internal structure by virtue of which it is possible to discriminate certain entities.}}
1.3. Areas
a) A "span" is a nonnegative integer. {{The term "span" is introduced to avoid confusion with the ALGOL 68 term "size", which has a totally different meaning. Since integers may be used in the MIAM for index calculations, they correspond to the integers of "size 0" (i.e., having the mode INT) from ALGOL 68.}}
b) An "area" has a span s and a "key" k, and is then composed of a sequence of s contiguous {{memory}} "cells", selected by "pointers", which are denoted k·0, k·1, ..., k·(s-1).
{{Areas come into being as the result of a GEN or EST-instruction. It is not further specified here what cells are, but in the realization they should correspond to the smallest units of memory that are addressable in an efficient way. On byte-oriented machines, this will be a byte. The key of an area and the pointer of a cell both correspond, in the realization, to its address. The main reason for maintaining a distinction between "keys" and "pointers" is that the operations: "given the pointer k·i, determine the corresponding address" and "given the pointer k·i, determine the key k" must both be efficiently realizable; the latter to make garbage collection tolerably efficient. A simple way is to represent the pointer k·i by a pair <address(k), address(k)+i>, although this requires more memory for a pointer. Under this realization scheme, a key may be represented more succinctly than a pointer.}}
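The pointer representation suggested in the remark above can be sketched in a few lines of Python (the names and the concrete addresses are illustrative assumptions, not part of the definition):

```python
def make_pointer(key_address, i):
    # the pair <address(k), address(k)+i> suggested in the remark above
    return (key_address, key_address + i)

def cell_address(p):
    # "given the pointer k·i, determine the corresponding address"
    return p[1]

def key_of(p):
    # "given the pointer k·i, determine the key k", e.g., for the garbage collector
    return p[0]

p = make_pointer(1000, 3)
assert cell_address(p) == 1003 and key_of(p) == 1000
```

Both operations are constant-time under this scheme, at the price of storing two addresses per pointer.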
c) Each cell is uniquely determined by its pointer, and vice versa, that is, k₁·i₁ and k₂·i₂ select the same cell if and only if k₁ = k₂ and i₁ = i₂.
{{The effect of garbage collection and compaction is transparent: although, in a realization of the MIAM, cells may be physically moved, the corresponding representations of keys and pointers are accordingly updated. This requires, of course, that the realization of the MIAM is able to keep track of all pointers.}}
d) The key of an area is said to "access" that area, and the pointers of an area are also said to "access" that area {{although the pointers point to sites in the area}}.
e) If p = k·i, where p is a pointer and k is a key, then p·j stands for k·(i+j); it is "required" {{1.7.1.f}} that 0 ≤ i+j < s, where s is the span of the area accessed by k.
f) Areas have a "scope", which is a nonnegative integer. The scope "of" a key or pointer is the scope of the area that it accesses {{and the scope of other objects is the largest (i.e., "newest") scope of component keys or pointers}}.
{{This scope is akin to the ALGOL 68 scope; it indicates the nesting of lifetimes of areas, and thereby of objects residing in the areas. By defining the scope of objects in terms of the scope of areas, it is sufficient to remember, in a realization, one scope per area, instead of per object, reducing the memory requirements. A price is paid in that it may be necessary to keep a whole area because of one object that may not be relinquished, as in the presumable translation of REF INT X = (HEAP [large] INT)[1].}}
g) A copy of an "object" {{1.5.a}} of a "type" {{1.4.a}} <k, a, l, u, m> "occupies" a "site", "appointed" by a pointer p = q·a which is "suitable" for that type, and the site then consists of the cells selected by q·l, ..., q·u.
{{The so-called "dynamic parts" are not considered part of the object. In the translation, they are represented by pointers.}} If q is k·i, where k is the key of an area with span s, then p is suitable for the type if i is a multiple of m, 0 ≤ i+l and i+u ≤ s. The object may be denoted, in a context where the type is known, by #p. {{If the type is not known, the denotation "#p" is ambiguous.}}
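The suitability condition just stated is a small piece of arithmetic. A Python sketch, under the assumptions that a pointer k·j is modelled by its index j within the area and that INT0 is a hypothetical 4-cell integer type:

```python
def suitable(j, area_span, typ):
    # typ is a quintuple (token, a, l, u, m); the pointer is p = q·a,
    # so q = k·i with i = j - a, and the object's cells are k·(i+l)..k·(i+u)
    _tok, a, l, u, m = typ
    i = j - a
    return i % m == 0 and 0 <= i + l and i + u <= area_span

INT0 = ("INT0", 0, 0, 3, 4)          # assumed: 4 cells, modulus 4

assert suitable(0, 8, INT0) and suitable(4, 8, INT0)
assert not suitable(2, 8, INT0)      # not aligned to the modulus
assert not suitable(8, 8, INT0)      # last cell would fall outside the area
```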
h) New copies may be made to occupy sites, thereby obliterating (parts of) former copies, if any. If x denotes an object of known type, x =# p stands for the action by which a new copy of the object x is made to occupy the site appointed by p; it is required {{1.7.1.f}} that p be suitable for x and that the scope of x be at least that of p. Moreover, if x is a label, it is required that x not be abortive {{1.7.1.d}}.
{{It can be shown that a key occupying a site, or held in T or S, must be the key of a locale (1.6.a). The GEN-instruction (3.3.a) for creating an area that is not a locale does not make its key available, and it is impossible to retrieve the key of an area from a pointer accessing it by means of MIAM-instructions.}}
i) The phrase "p appoints a site occupied by a copy of x" may be shortened to "p points to x".
j) The objects "contained in" an area are the objects pointed to by pointers of the area. {{Care should be taken not to confuse the pointers "of" an area, i.e., selecting a cell of the area, and the pointers "contained in" an area.}}
k) Apart from the "proper" pointers introduced in section 1.3.b, there exists a {{unique}} dummy pointer, the token "Nix", which does not access any area {{and thus does not select any cell}}, and which is unsuitable for any type. The scope of Nix is 0.
l) A "model" is a {{possibly empty}} set of pairs <t, d>, where t is one of the tokens "KEY" and "PTR", and d is an integer. Each area has a model, which may vary as a result of execution.
m) An area A is "reachable" if it is accessed by T or S {{1.6.b}}, if it is the C-locale {{1.6.c}}, if P {{1.5.2.c}} accesses the area, if the area contains a process descriptor whose token is Halted, or if some site in a reachable area A' is occupied by the key of A or by some pointer accessing A. An area is reachable only if it is reachable by virtue of the previous sentence.
n) "Model conformance" holds if the following three criteria are met:
(i) The model of the C-locale {{1.6.c}} is empty;
(ii) The C-locale contains no pointers or keys accessing a different locale;
(iii) For each reachable area, other than the C-locale, there is a one-to-one correspondence between the pointers k·d of that area that point to a key (a pointer), and the elements <KEY, d> (<PTR, d>) of the model, with a possible exception for keys and pointers that are Nix or that access the C-locale.
{{If model conformance holds, this means that the sites of keys and pointers may be deduced from the model for purposes of garbage collection and area compaction. In the realization, a specialized representation of the sets that models are may be used. Because of the dynamic span determination of areas that are not locales, the collection of models that may play a role during execution is not finitely bounded. However, the corresponding models have a repetitive structure and may be represented in the realization by means of "hyper-models", by allowing, roughly, the abbreviation "and so on until the end of the area". The set of hyper-models that may play a role can then be kept finite and may be determined statically.}}.
o) The formula "m+d'", where m is a model and d' is an integer, stands for the model {<t, d+d'> | <t, d> ∈ m}.
1.4. Types
{{Types in the MIAM are akin to ALGOL 68 modes. They are static attributes that allow the realization of efficient treatment of objects. The main difference with ALGOL 68 is that a given type "describes" the lay-out of a contiguous segment of memory of fixed size. Pointers have a common type that does not contain the type of the object pointed to. {{There is no such thing as type checking in the MIAM.}} However, in operations manipulating objects through the access provided by a pointer, the type of that object is always statically known.
The "philosophy" behind the type model used in the MIAM is as follows. In actual computers, there are certain privileged "primitive types" in terms of whose semantics the machine instructions are described. An efficient realization must make use of this fact whenever reasonably possible. Now it may occur that not all addresses are equally suited for storing objects of a given primitive type. For example, it may happen that integers may be operated upon efficiently only if they are stored at addresses that are a multiple of four. If the design of the MIAM does not recognize this fact of technology, efficient realization of the MIAM is out of the question. Now the translation phase should not be bothered by parameters of the target hardware: the MIAM code turned out for a particular ALGOL 68 program should be identical. The solution chosen is to assign types to objects of the MIAM in such a way that the realization may choose addresses for MIAM pointers suitable for the objects pointed to. Whether this is indeed possible for a given contraption depends on how reasonable its address restrictions are. The model developed here will not cater for the case where integers may not be stored at addresses that are one more than some prime number. The assumptions used are:
(i) The hardware addresses suitable for a given primitive type P are of the form a_P + n*m_P, where n runs through the integers. (It is not assumed, of course, that there is an infinite number of suitable addresses; see under (ii).) The quantity m_P, the "modulus" of P, is at least one. Although this fact is not used, it is reasonable to choose a_P such that 0 ≤ a_P < m_P.
(ii) The hardware cells that will be occupied by a P object "at" address a_P + n*m_P are l_P + n*m_P through u_P + n*m_P, and if all of these cells physically exist, the address is indeed suitable.
(iii) There exists a (least) common multiple M of all primitive type moduli (i.e., the set of moduli is finite).
In the realization, an area must always be "aligned" in such a way that the address corresponding to its first cell, k·0, is a multiple of M. This ensures that a lay-out allowing efficient access to the objects in an area is possible.}}
a) A "type" is a quintuple <t, a, l, u, m>, where t is a token and a, l, u and m are integers. The "modulus" m is at least 1, and is a divisor of the "area modulus", denoted by "M". Moreover, l satisfies 0 ≤ l < m, and the "span" of the type, u-l+1, is at least 0.
{{If a = l for all primitive types, this property is inherited for composite types if the formulae given below are used.}}
b) The token of a type is either an "atomic type token" or the token "STRUCT". The names of the atomic type tokens correspond, one to one, to the terminal productions of 'tok' {{2.1.h}}, after leading non-significant digits '0', if any, have been omitted from the constituent dec, if any {{see 2.1.m}}. If two types have the same atomic type token, then they are one and the same type.
{{Although each object has a type, there are no objects whose type has the token STRUCT. The latter kind of types may be used to model "composite objects" that, to the MIAM, and notably its realizer, exist only in the eye of the beholder (see also the remarks in section 1.5.a). The function of the atomic type tokens is to allow two types to be different, even if all four characteristic numbers are equal, since some hardware may require different instructions for different primitive types, even if the abstract meaning of the instructions is the same.}}
c) "Tjoin(t_1, t_2)" and "Djoin(t_1, t_2)", where t_1 and t_2 are types, stand for a type t and an integer d, respectively, satisfying:
(i) if p is a pointer, suitable for t, then p is also suitable for t_1 and p·d is suitable for t_2;
(ii) the sites appointed by p for t_1 and by p·d for t_2 are disjoint, and are contained in the site appointed by p for t;
(iii) the token of t is STRUCT;
(iv) if t_1 is of the form <STRUCT, 0, 0, u, M>, then t is of the form <STRUCT, 0, 0, v, M>.
Tjoin and Djoin are {{deterministic}} functions of their arguments. Moreover, if Seq(n, t), where n is an integer ≥ 0 and t is a type, is defined inductively by
- Seq(0, t) = Type[G] {{2.2.j}};
- Seq(n+1, t) = Tjoin(Seq(n, t), t);
then there exists a function Shift, mapping types to integers, such that the value of Djoin(Seq(n+1, t), t) is Djoin(Seq(1, t), t)+n*Shift(t).
{{Formulae computing t and d, given t_1 = <k_1, a_1, l_1, u_1, m_1> and t_2 = <k_2, a_2, l_2, u_2, m_2>, satisfying the requirements, are:
- let q_2 be ((u_1+m_2-l_2) ÷ m_2) × m_2, and
- let q_1 be ((q_2-(u_1+1-l_2)) ÷ m_1) × m_1, where ÷ denotes integer division;
- t = <STRUCT, a_1+q_1, l_1+q_1, u_2+q_2, LCM(m_1, m_2)>, where LCM stands for the Lowest Common Multiple;
- d = (a_2+q_2)-(a_1+q_1).
The idea is that in the compound type the site of the t_2-object is shifted over q_2 cells to the right, being the least multiple of m_2 such that the shifted site is disjoint from the earliest possible site for the t_1-object. Next, the t_1-site is shifted to the right over q_1 cells, being the greatest multiple of m_1 that leaves the sites disjoint, in order to obtain a tight packing. This is not necessarily the tightest packing if the equality LCM(m_1, m_2) = max(m_1, m_2) does not hold for the moduli involved. More realistically, interchanging the order of the field sites might also give a tighter packing.
The "axiom" defined by means of the auxiliary function Seq means that sites for a sequence of objects of the same type are allocated equidistantly.}}
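The formulae above can be transcribed directly into a short Python sketch. The function names and the example type INT0 (a hypothetical 4-cell integer type) are mine, not part of the definition; the final assertion checks the equidistance property postulated for Seq:

```python
from math import gcd

# Types are quintuples (token, a, l, u, m), as in 1.4.a.
def tjoin_djoin(t1, t2):
    _k1, a1, l1, u1, m1 = t1
    _k2, a2, l2, u2, m2 = t2
    # least multiple of m2 moving the t2-site past the earliest t1-site
    q2 = ((u1 + m2 - l2) // m2) * m2
    # greatest multiple of m1 keeping the two sites disjoint (tight packing)
    q1 = ((q2 - (u1 + 1 - l2)) // m1) * m1
    t = ("STRUCT", a1 + q1, l1 + q1, u2 + q2, m1 * m2 // gcd(m1, m2))
    d = (a2 + q2) - (a1 + q1)
    return t, d

def shift(t):
    _k, _a, l, u, m = t
    return ((u + m - l) // m) * m    # least multiple of m >= the span u-l+1

G = ("STRUCT", 0, 0, -1, 1)          # Type[G], see 2.2.j
INT0 = ("INT0", 0, 0, 3, 4)          # assumed: 4 cells, modulus 4

seq1, d1 = tjoin_djoin(G, INT0)      # Seq(1, INT0)
seq2, d2 = tjoin_djoin(seq1, INT0)   # Seq(2, INT0)
# consecutive Djoin values differ by Shift(t), as in 1.4.c/d
assert d2 - d1 == shift(INT0)
```

With these numbers, seq1 comes out as ("STRUCT", 0, 0, 3, 4) and consecutive sites for INT0 objects are allocated 4 cells apart.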
d) "Shift(t)", where t is a type, stands for the integer Djoin(Seq(n+1, t), t)-Djoin(Seq(n, t), t), where Seq is the function introduced in the previous section {{and Shift is the same function whose existence was postulated there}}.
1.5. Objects
a) An "object" is a "plain object" {{1.5.1.a}}, a key or a pointer, or a "descriptor" {{1.5.2.a, 1.5.2.b}}. Each object has a type and a scope. The token of the type of a key (a pointer) is "KEY" ("PTR").
{{The translator may model "composite objects" by composing them of a sequence of other objects. The MIAM proper does not "recognize" the existence of composite objects -- other than parallel action descriptors, whose internal structure, however, is inaccessible --, but provides all necessary equipment for the modelling, such as types for composite objects, being a function of the types of the components, determined with the "Tjoin" function. The site occupied by a "copy" of such a composite object is then occupied by a sequence of copies of its components, possibly leaving some cells unused in between. The "scope" of a composite object is the largest of the scopes of its components.}}
1.5.1. Plain objects
a) A "plain object" is an integer, an "answer" (i.e., one of the tokens "Yes" and "No"), a "label" {{1.7.1.c}} or some "other plain object" {{e.g., a character or a real number}}. The token of the type of an integer (an answer, a label) is "INT0" ("ANS", "LAB"). The names of the tokens of the types of other plain objects correspond, one to one, to the terminal productions of 'tok' obtained by adding productions for it by virtue of 2.1.i. The scope of a plain object is 0.
{{The definition and treatment of other plain objects are left open in this definition of the MIAM. For a contract between translation and realization, these have to be filled in, of course.}}
{{A formula computing Shift(<k, a, l, u, m>), if the formulae for Tjoin and Djoin given above are used, is ((u+m-l) ÷ m) × m, where ÷ denotes integer division. This function is useful for translating selection on multiple values.}}
1.5.2. Descriptors
{{Parallel actions may be translated by means of parallel action and process descriptors. Starting from some primal process descriptor, a tree is descended of currently active processes. In the model given below, the branches of the tree correspond to pointers pointing in the direction from leaves to root. However, no explicit connection is given between a process descriptor and any parallel action descriptor created by the corresponding process (by the action of a SPAWN-instruction). A data structure realizing the tree must contain supplementary pointers to connect the tree; otherwise, the "If there exists" in the description of "Search Process" (1.7.2.c) could not be effected in a reasonable way. Also, the determination of "Spawner(p)" {{1.5.2.d}} requires implicit pointers pointing upwards in the tree.}}
a) A "parallel action descriptor" is an object, composed of a sequence of "process descriptors" {{1.5.2.b}} and a "parent" pointer {{which, if it is not Nix, points to a process descriptor <Spawned, l>}}. The site occupied by a copy of a parallel action descriptor is occupied by a sequence of copies of its components, possibly leaving some cells unused in between. The scope of a parallel action descriptor is the largest of the scopes of its components. A parallel action descriptor with n process descriptors has a type whose token has a name of the form "PARdec", where Val(dec) {{2.2.b}} is n.
b) A "process descriptor" is an object, composed of a token and possibly other objects; it is of one of the following four forms:
- <Running>;
- <Spawned, l>, where l is a label {{for continuation after all spawned processes have reached completion}};
- <Halted, sp, k_T, k_S, l>, where sp is a {{semaphore}} pointer pointing to an integer, k_T and k_S are keys and l is a label {{for resumption if the condition that caused the halting no longer applies}};
- <Complete>.
The scope of a process descriptor is the largest of the scopes of its components, where the tokens are assumed to have scope 0.
c) There is a register "P" holding a pointer which, if it is not Nix, points to a process descriptor {{the (unique) "<Running>" process descriptor}}.
d) "Spawner(p)", where p is a pointer pointing to a process descriptor, is the parallel action descriptor, pointed to by the pointer q, such that the site appointed by p is contained in the site appointed by q.
{{This definition is given in terms of pointers, since different parallel action descriptors might contain identical process descriptors.}}
1.6. Locales
a) An area may be a "locale".
{{Locales are created by an EST-instruction; other areas are created by a GEN-instruction. Locales are chained by a dynamic and a static (lexicographic) chain. Parallel action descriptors may build a tree from these, otherwise linear, chains.}}
b) There exist two registers T and S that may hold keys of locales, that are then known as the "T-locale" and the "S-locale", respectively.
c) The "C-locale" is a {{standard}} locale, existing without explicit creation, whose scope is 0 and whose span is sufficiently large {{if realization permits}} to accommodate the static action prescribed by all of the CFILL-instructions {{3.2.f}}. "C" stands for the key of the C-locale. It is required that no pointers or keys accessing areas other than the C-locale are made to occupy sites contained in the C-locale.
{{The C-locale is the only locale whose scope is 0, and it is also the only area that exists without creation, not counting some fictitious locale(s) facilitating the semantic description.}}
1.7. Actions
1.7.1. The program
a) The program consists of a sequence of "instructions". The "execution" of the program consists of the execution of the instructions, one by one, starting with the first instruction, and ending, if the program is normally completed, with the last instruction. The execution of each instruction determines a successor, which is, unless otherwise specified, the next instruction in {{the sequence which is}} the program, or it results in "abortion" with an "error code" (an integer), whereupon no further instructions are executed.
b) For some instructions the execution is empty, apart from determining the next instruction as successor {{e.g., a LABEL instruction}}. Such instructions may, however, influence the meaning of other instructions. This influence is execution-independent and must, therefore, be determined statically by performing once, in the textual order, the "static actions" prescribed for these instructions.
{{This holds, especially, for the CFILL-instructions and for the EST-FIN pairs. If the realization, in a second pass through the program to generate concrete code for the dynamic actions, should re-perform the static actions, the meaning remains the same. For example, the "meaning" of 4 in the COPY-instruction in
JOIN, A, INT0, 4;
COPY, INT0, 666, &T4;
JOIN, 4, INT0, 4;
is the Offset[4] that has been set in the first instruction, even though the following setting of Offset[4] is allowed and has a well-defined effect.}}
c) A "label" is the "valuation" {{2.2.a}} of a "lab" {{2.1.q}}. The scope of a label is 0, and its type is LAB.
d) There exists a special class of "abortive" labels, which have an error code.
e) "LABEL" {{3.4.c}} and "DOWN" {{3.5.d}} instructions are "labelled" with the valuation of their first argument {{which is execution independent}}. It is required that no two different instructions be labelled with the same label.
f) If, in this description, some condition is said to be "required", this means that the MIAM is not designed to be able to cope with the situation arising if the condition is not fulfilled.
{{It is certainly not the intention that a realization of the MIAM should check the requirements. Rather, the translation should generate a program whose execution cannot violate them.}}
1.7.2. Auxiliary actions
a) "Newkey(m, s, c)", where m is a model, s is a span and c is a scope, stands for the key yielded by the following action:
• it is required that model conformance holds {{1.3.n}};
• the action yields the key of a newly created area with model m, span s and scope c.
{{It may be helpful to know that this action is only prescribed by the instructions EST {{3.2.a}} and GEN {{3.3.a}}, and that prior to its invocation the execution of these instructions does not call for actions that might influence the occupancy of any area.}}
b) "Goto(l)", where l is a label, stands for the action whereby the instruction labelled with the label l, if any, is determined as successor to the instruction currently executed; otherwise, it is required that l be abortive and the action results in abortion {{1.7.1.a}} with the error code of l.
c) "Search Process" stands for the following action:
If there exists a pointer q, contained in a reachable area {{1.3.m}}, pointing to a process descriptor <Halted, sp, k_T, k_S, l>, such that sp points to a nonnegative integer,
then
• <Running> =# q;
• P is set to q;
• T is set to k_T and S is set to k_S;
• Goto(l);
otherwise,
• Goto(a), where a is an abortive label with error code Deadlock.
d) "Discard Par(k)", where k is the key of a locale, stands for the following action:
If P and k access the same locale,
then
• let Spawner(P) {{1.5.2.d}} be the parallel action descriptor {{1.5.2.a}}
<pp, pd_1, ..., pd_n>;
• P is set to pp;
• <Running> =# P;
• <Nix, <Complete>, ..., <Complete>> =# s {{which, in a realization of the MIAM, should be a dummy action}};
• Discard Par(k) {{again}};
otherwise,
• if k is not T, Discard Par(*k·u), where u is Offset[U] {{2.2.n}}.
2. ARGUMENTS
2.0. Notation
In the next section, a syntax definition method is used that is a variant of BNF. Non-terminal symbols are a sequence of lower case letters. A colon separates the left-hand side of a rule from the right-hand side, and the alternatives are separated by a bar ("|"). All other marks are terminal symbols and stand for themselves. Blank spaces are not significant (but are inserted in the syntax rules in such a way that they help to increase legibility in the terminal productions if treated as terminal symbols).
2.1. Syntax
{{The following transcriptions may be helpful:
& - pointer to
* - follow pointer
A - Around-chain field
C - C-locale
E - Established locale
G - Generated area
H - Shift
I - Indirect
N - No
S - S-locale
T - T-locale
U - Upon-chain field
X - Nix
Y - Yes
ans - answer
arg - argument
dec - decimal
ins - inspectable (only)
int - integer
jtp - joined type
lab - label
lev - static level
lit - literal
loc - locale
off - offset
ptr - pointer
rec - recipient pointer
res - resident (copy)
sin - signed integer
tok - token of type
typ - type
}}
a) arg: lit | ins | rec
b) lit: sin | Y | N | Ldec | Adec | X | Htyp
c) Other productions for 'lit' may be added, together with rules for their valuation {{2.2.a}}. {{Presumably, these other productions correspond to denotations for the ALGOL 68 modes mirrored by INTsin with Val(sin) ≠ 0 and by additional productions for 'tok'; see 2.1.i.}}
d) sin: dec | -dec
e) ins: #Coff | #Toff | #Soff | #Ioff₁·off₂
f) rec: Coff | &Coff | Toff | &Toff | Soff | &Soff | Ioff₁·off₂
g) typ: tok | jtp
h) tok: G | E | KEY | PTR | ANS | INTsin | LAB | PARdec
i) Other productions for 'tok' may be added. {{Presumably, other productions for 'tok' are 'CHAR', 'REALsin', 'BITSsin' and 'BYTESsin', and 'CHANNEL', 'BOOK' and 'BUF' if the approach from VAN VLIET [2] is taken for the translation.}}
j) jtp: U | A | dec
k) lev: dec
l) off: jtp | off+jtp
m) dec: a nonempty sequence of decimal digits
A dec may have leading digits 0; however, 0dec1 is considered entirely equivalent to dec1 {{so L007 and L7, e.g., are one and the same lab}}.
n) res: ins | Coff | Toff | Soff
o) int: sin | Htyp | res
p) ans: Y | N | res
q) lab: Ldec | Adec | res
r) ptr: X | ins | rec
{{Auxiliary definition}}
s) loc: C | T | S
2.2. Valuation
a) The "valuation" of an arg determines an object {{generally during execution}}; it is denoted by Val(arg). A typ t determines statically a type, denoted by Type[t], and a model, denoted by Model[t]. Moreover, if t is a jtp, it determines an integer {{an "offset"}}, denoted by Offset[t]. In the static or dynamic requirements and actions, Type[t], Offset[t] and Model[t], where t is a jtp, have a meaning only if they have been set by the static action of a textually preceding instruction which has not been invalidated by a textually intervening instruction, and they have the meaning as set by the textually last such instruction.
b) The valuations of the lits are {{execution independent and are}} determined as follows:
- Val(dec) is the integer whose decimal representation is dec;
- Val(-dec) = -Val(dec);
- Val(Y) is the answer Yes;
- Val(N) is the answer No;
- Val(Ldec) is the label "Ldec"; it is required that there be exactly one instruction labelled with "Ldec";
- Val(Adec) is an abortive label with error code Val(dec);
- Val(X) is the pointer Nix;
- Val(Htyp) is Shift(Type[typ]).
c) Offset[off+jtp] = Offset[off]+Offset[jtp].
d) Val(&Coff) is the pointer C·Offset[off].
e) Val(&Toff) is the pointer T·Offset[off].
f) Val(&Soff) is the pointer S·Offset[off].
g) Val(locoff) = *Val(&locoff) {{e.g., Val(T4) = *Val(&T4) = *T·Offset[4]}}.
h) Val(Ioff₁·off₂) is the pointer p·Offset[off₂], where p is Val(Toff₁); it is required that p be a pointer, other than Nix.
i) Val(#arg) = *Val(arg) {{it is required that Val(arg) be a pointer, other than Nix}}.
j) Type[G] is <STRUCT, 0, 0, -1, 1> and Model[G] is the empty set.
k) Type[E] is <STRUCT, 0, 0, -1, M>, where M is the area modulus {{1.4.a}}, and Model[E] is the empty set.
l) Type[tok], where tok is not G or E, is the type whose token is named tok.
m) Model[tok] is {<tok, 0>} if tok is KEY or PTR, and the empty model {} otherwise.
n) Type[U] is Tjoin(Type[E], Type[KEY]), Offset[U] is Djoin(Type[E], Type[KEY]), and Model[U] is {<KEY, Offset[U]>}.
o) Type[A] is Tjoin(Type[U], Type[KEY]), Offset[A] is Djoin(Type[U], Type[KEY]), and Model[A] is {<KEY, Offset[U]>, <KEY, Offset[A]>}.
{{The effect is the same as would be obtained for decs u and a by
JOIN, E, KEY, u;
JOIN, u, KEY, a;}}
p) Type[dec], Offset[dec] and Model[dec] are defined if set by a JOIN, EST, MAX or FIN-instruction {{3.1.a, 3.2.a, 3.1.c, 3.2.b}}.
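The valuation rules d)-i) can be modelled concretely. In the sketch below, areas are Python lists, pointers are (area, index) pairs, and the Offset table plays the role of the statically set Offset[jtp]; all of these representations, and the stored values, are assumptions of this illustration, not part of the definition:

```python
Offset = {"4": 4}                    # as if set by a textually preceding JOIN

class Machine:
    def __init__(self):
        self.T = [None] * 8          # the T-locale, span 8 (assumed)
        self.S = [None] * 8
        self.C = [None] * 8

    def val_amp(self, loc, off):     # 2.2.d-f: Val(&locoff) = loc·Offset[off]
        area = getattr(self, loc)
        return (area, Offset[off])

    def val_res(self, loc, off):     # 2.2.g: Val(locoff) = *Val(&locoff)
        area, i = self.val_amp(loc, off)
        return area[i]

    def val_ins(self, loc, off):     # 2.2.i: Val(#locoff) = *Val(locoff)
        p = self.val_res(loc, off)   # required: a pointer other than Nix
        area, i = p
        return area[i]

m = Machine()
m.T[4] = (m.S, 2)                    # a pointer occupying site T·4
m.S[2] = 666
assert m.val_amp("T", "4") == (m.T, 4)
assert m.val_res("T", "4") == (m.S, 2)
assert m.val_ins("T", "4") == 666
```

The three assertions trace one argument through the three levels: the recipient &T4 is a pointer into the T-locale, the resident T4 is the pointer stored there, and the inspectable #T4 is the object it points to.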
3. THE INSTRUCTIONS
3.0. Notation
In each instruction format given below, a lower-case letter, possibly adorned with a subscript or an apostrophe, stands for a non-terminal symbol for which a production rule is given in the lines following the instruction format. In the requirements and actions given for the instruction, they stand for the terminal productions by which they are replaced in the actual instruction. Production rules for different non-terminal symbols that have a common right-hand side may be replaced by one rule whose left-hand side consists of a list of the original left-hand sides.
3.1. Type instructions
a) JOIN, s, t, u;
s, t: typ
u: dec {{type}}
Static action:
- Type[u] is set to Tjoin(Type[s], Type[t]), Offset[u] is set to Djoin(Type[s], Type[t]), and Model[u] is set to the union of Model[s] and Model[t]+Offset[u] {{1.3.o}}.
{{If s and t accommodate the ALGOL 68 modes SS and TT, then u will accommodate STRUCT(SS f1, TT f2). If the argument Toff gives access to an object of the composite type u, then Toff gives access also, in a context where an object of type s is implied, to the first field, and Toff+u gives access to the second field. A structured mode with more than two fields, e.g., STRUCT(SS f1, TT f2, UU f3), may be handled by treating it as STRUCT(STRUCT(SS f1, TT f2) fx, UU f3) (or as STRUCT(SS f1, STRUCT(TT f2, UU f3) fy), which does not necessarily give the same layout).
The type G is a dummy that is useful to make uniform translation schemes in which the first (or the last) field does not have to be translated in a special way; Type[G] is defined as the type of a virtual object of zero span that can be accommodated at any site. G is also useful to create types that are equivalent to already given types, except that the token is STRUCT, as is required by the MAX-instruction.
The type E forces alignment in the realization. It is especially useful for allocating sites in a locale (in translating an establishing-clause). Consider, for example, BEGIN SS x1; TT x2; UU x3; ... END. This can be handled as BEGIN STRUCT(SS f1, TT f2, UU f3) xx; ... END, but then the translation for accessing x1, say, as f1 of xx, depends on the subsequent identifier-declarations. If, however, a dummy declaration "EE dummy" is assumed immediately following the BEGIN, where EE is treated as a mode accommodated by Type[E], the access for x1 as computed by the above scheme becomes independent of the sequel. Actually, the EST-instruction introduces not only EE alignment automatically, but also, for convenience, two explicit fields for keys, as in BEGIN EE dummy; KEY upon, around; ... END.}}
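The static action of JOIN can be sketched as bookkeeping on three tables for Type, Offset and Model. The table representation, the primitive types SS and PTR used here, and all names are illustrative assumptions; the Tjoin/Djoin arithmetic follows the formulae of 1.4.c, and the model update follows 1.3.o and 2.2.m:

```python
from math import gcd

Type, Offset, Model = {}, {}, {}     # the statically maintained tables

def tjoin_djoin(t1, t2):
    _k1, a1, l1, u1, m1 = t1
    _k2, a2, l2, u2, m2 = t2
    q2 = ((u1 + m2 - l2) // m2) * m2
    q1 = ((q2 - (u1 + 1 - l2)) // m1) * m1
    t = ("STRUCT", a1 + q1, l1 + q1, u2 + q2, m1 * m2 // gcd(m1, m2))
    return t, (a2 + q2) - (a1 + q1)

def join(s, t, u):
    # static action of "JOIN, s, t, u;"
    Type[u], Offset[u] = tjoin_djoin(Type[s], Type[t])
    # models are united, the second one shifted by the new offset (1.3.o)
    Model[u] = Model[s] | {(tag, d + Offset[u]) for (tag, d) in Model[t]}

Type["G"], Model["G"] = ("STRUCT", 0, 0, -1, 1), set()        # 2.2.j
Type["SS"], Model["SS"] = ("INT0", 0, 0, 3, 4), set()         # assumed integer
Type["PTR"], Model["PTR"] = ("PTR", 0, 0, 3, 4), {("PTR", 0)} # 2.2.m

# a two-field struct, built as Tjoin(Tjoin(G, SS), PTR):
join("G", "SS", "1")
join("1", "PTR", "2")
# the model records where the pointer field sits inside the struct
assert Model["2"] == {("PTR", 4)}
```

Here the pointer field lands at offset 4, and the model of the composite type remembers exactly that, which is what garbage collection needs.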
b) SJOIN, s, t, u;
s, t: typ
u: dec {type}
Static action:
- The same static action is performed as would be performed by JOIN, s, t, u;.
Dynamic Requirement:
- It is required that, for any non-empty site appointed by a pointer that is the result of valuating, during execution, an argument of the form &loc·jtp_1+...+jtp_{n-1}+u, where the value of Offset[u] is defined by virtue of an SJOIN-instruction, neither that site nor any part or component thereof, be appointed by any pointer contained in a reachable area {{1.3.m}}.
(To grasp the usefulness of the requirement, it should be stressed that requirements of the MIAM, being clauses from a contract between the translator and the realizer, correspond to promises on the part of the first party. Since it is always possible to use a JOIN-instruction, the self-inflicted requirement of an SJOIN-instruction is a promise by the translator that no "alias" pointers will be set up appointing sites described with the offset u. This makes it possible for the realization to keep the corresponding objects in hardware registers, if this is desirable for optimization purposes, without global data-flow analysis to check the safety. In the general case, the translator can only make the promise after some degree of global analysis of the source text. This is not necessary, however, for the code emitted for anonymous counters needed to translate, e.g., various actions on multiple values, in which case the intended optimization may be quite profitable.)
c) MAX, s, t;
s: jtp {{type}}
t: jtp {{type}}
Static requirement:
- Type[s] is of the form <STRUCT, 0, 0, u, M>, and Type[t] is of the form <STRUCT, 0, 0, v, M>.
Static action:
- Type[t] is set to <STRUCT, 0, 0, max (u, v), M>;
- any statically preceding settings of Offset[t] and Model[t] become invalid.
[[The MAX-instruction is useful for translating UNITED modes. By using the type E, the required zeros and M can be forced. This is hardly a restriction, since each united object has to be allocated an area of its own in order to be able to set the model properly, and since areas have to be aligned anyway in the realization. This argumentation is invalid in cases like UNION(INT, REAL), where both variants give rise to an empty model, so some cells may remain unused because of the static requirement. Still, there is a good reason for always allocating a separate area for "united" objects: they may then be copied by simply copying the pointer yielded by the GEN-instruction. That this is the case does not follow from any particular property of the MIAM, but from the semantics of ALGOL 68 itself.
Another important application of MAX-instructions is for accommodating sites for anonymous intermediate yields in a locale: the "working stack". The type of the locale may be treated as the union of all types for all intermediate stages the site layout of the locale may be in.]]
3.2. Instructions concerned with locales
[[The following instructions use a "static level" with a "type number". As follows from the static requirements, this is redundant (but possibly helpful) information; however, a nesting of EST and FIN-instructions is thereby enforced.]]
a) EST, l, s;
l: lev {{static level}}
s: dec {{type}}
Static requirement:
- Val(l) is one more than the current static level.
Static action:
- the static level is set to Val(l);
- Type[s] is set to Type[A] and Model[s] is set to Model[A];
- the type number of the static level is set to Val(s).
Dynamic action:
- let c be the scope of the T-locale;
- let k be Newkey(Model[A], os, c+1) {{1.7.2.a}}, where os is the Offset[s] statically set at the corresponding (i.e., textually first following) instruction "FIN, l, s;" {{with the same l and s}};
- S is set to T;
- T is set to k;
- S => T.Offset[U] and S => T.Offset[A].
{{This instruction is typically the first to be emitted in the translation of an establishing-clause. If the "upon" and the "around" locale do not coincide, one of the following instructions in the translation of an establishing clause will, presumably, be a SETS-instruction.}}
b) FIN, l, s;
l: lev {{static level}}
s: dec {{locale type}}
Static requirements:
- Val(l) is the current static level, and Val(s) is the corresponding type number;
- the T-locale does not contain a parallel action descriptor, not all of whose process descriptors are <Complete>;
- If Val(l) is 0, this instruction is the last instruction of the program.
Static action:
- Offset[s] is set to Djoin(Type[s], Type[G]) {{or, maybe, to obtain a multiple of M, to Shift(Type[s]) in the actual realization}};
- the static level is decreased by one.
Dynamic requirement:
- The dynamically last preceding EST-instruction not yet dynamically matched by a corresponding FIN-instruction is the statically corresponding EST-instruction.
Dynamic action:
- T is set to T.Offset[U];
- S is set to T.Offset[A] {{in which T is the newly set T}}.
{{Typically, some MAX-instructions may have intervened between the EST and FIN-instruction to set Type[s].}}
c) MOD, m;
m: jtp {{model}}
Dynamic action:
- the model of the T-locale is made to be Model[m].
{{This instruction is used to ensure that Model conformance holds prior to the execution of an EST- or GEN-instruction. It is not necessary, on translation, to issue a MOD-instruction for each change in the occupancy by keys or pointers; if no EST- or GEN-instruction may intervene between the execution of two MOD-instructions, the first one was superfluous. Care should be taken that all sites of keys and pointers indicated in the model have indeed been filled before an EST- or GEN-instruction is executed; for pointers this may be done by using Nix.}}
d) KEEP, p;
p: ptr
Requirements:
Let A be the area accessed by Val(p).
- A is not a locale {{i.e., A is created by a GEN-instruction}};
- The scope of the T-locale is greater than 0;
- The scope of A is the scope of the T-locale;
- the scope of any key or pointer, a copy of which occupies a site in A, is at most the scope of the S-locale.
Dynamic action:
• the scope of A is made to be the scope of the S-locale.
{{The KEEP-instruction exists only for reasons of efficiency. Without this instruction, the multiple value yielded by the inner closed-clause in, e.g.,
BEGIN [] REAL x = ([large] REAL xx; ...; xx); ... END
would have to be copied to a newly created area since its scope would be too large (i.e., numerically).}}
e) SETS, a;
a: arg {{key of locale}}
Dynamic action:
• S is set to Val(a).
f) CFILL, t, l, u;
t: tok
l: lit {{object of type Type[t]}}
u: off {{for object of type Type[t]}}
Static requirements:
• t is not G, E, KEY or some PARdec {{for which, anyway, no lits can be given}};
• no textually preceding CFILL-instruction has caused a copy to occupy the site appointed by C.Offset[u], nor any part or component thereof.
Static action:
• Val(l) => C.Offset[u].
{{The filling of the C-locale is performed before execution starts; so an ins of the form *Coff may be used before the corresponding CFILL-instruction. For this to be meaningful, however, the (static) meaning of Offset[off] has to be the same in both instructions.}}
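A minimal model of this write-once prefilling of the constant locale; the dict representation and names are ours:

```python
# The C-locale as a write-once constant pool keyed by offset.
C_LOCALE = {}

def cfill(offset, value):
    """Statically place `value` at `offset`; each site may be filled once."""
    if offset in C_LOCALE:
        raise ValueError("site already occupied by an earlier CFILL")
    C_LOCALE[offset] = value
```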
3.3. Instructions concerned with pointers
a) GEN, t, s, t', a, r;
t, t': typ {{possibly G}}
s: int {{number of t' elements}}
a: arg {{key}}
r: rec {{for pointer}}
Dynamic requirement:
• let N be Val(s);
• N ≥ 0.
Dynamic action:
• let c be the scope of Val(a);
• let t_0 be Type[G] and let m_0 be Model[G] {{i.e., empty}};
For i from 1 to N:
• let t_i be Tjoin(t_{i-1}, Type[t']) and let m_i be the union of m_{i-1} and Model[t'] + Djoin(t_{i-1}, Type[t']);
• let u_0 be Tjoin(Type[E], Type[t]) and let n_0 be Model[t] + Djoin(Type[E], Type[t]);
• let u_1 be Tjoin(u_0, t_N) and let n_1 be the union of n_0 and m_N + Djoin(u_0, t_N);
• let k be Newkey(n_1, Djoin(u_1, Type[G]), c) {{1.7.2.a}};
• k·Djoin(Type[E], Type[t]) => Val(r).
{{The computations are the same as would have been performed in
JOIN, G, t', t_1;
JOIN, t_1, t', t_2;
...
JOIN, t_{N-1}, t', t_N;
JOIN, E, t, u_0;
JOIN, u_0, t_N, u_1;
which, however, cannot be performed statically if N is not known statically. See also the remarks about Model conformance in 3.2.c.}}
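The run-time analogue of the JOIN chain in the comment above can be sketched as a loop over the element count. We reuse our toy (span, alignment) representation; the E-alignment and all Model bookkeeping are omitted:

```python
# Run-time layout of an area for n elements of one element type,
# mirroring the JOIN chain that cannot be unrolled statically.

def align_up(n, a):
    """Round n up to the next multiple of a."""
    return -(-n // a) * a

def gen_layout(elem, n):
    """elem is a (span, alignment) pair; return (total_span, offsets)."""
    span, align = elem
    offsets, cur = [], 0
    for _ in range(n):
        cur = align_up(cur, align)  # Djoin(t_{i-1}, Type[t'])
        offsets.append(cur)
        cur += span                 # extend t_{i-1} to t_i
    return align_up(cur, align), offsets
```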
b) COPY, t, a, r;
t: tok
a: arg {{object of type Type[t]}}
r: rec {{for object of type Type[t]}}
Static requirement:
• t is not G, E or some PARdec.
Dynamic requirement:
• if the valuation Val(a), according to Section 2.2.a, is some *Val(b), then either Val(b) = Val(r), or the sites appointed by Val(b) and by Val(r) have no cells in common.
Dynamic action:
• Val(a) => Val(r).
{{The copying of a composite object (see 1.5.a) has to be written out in
terms of copying its primitive components. There is no way in which a
parallel action descriptor can be copied.}}
c) DOT, t, p, d, r;
t: jtp
p: ptr
d: int
r: rec {{for pointer to object of type Type[t]}}
Dynamic action:
• Val(p)·Val(d) => Val(r).
{{This action is generally only meaningful if p is derived from the result
of a GEN-instruction, and d is the result of multiplying an integer with
Val(Ht'), where t' is the fourth argument of that GEN-instruction. Because
of the commutativity of addition, field-selection on multiple values (e.g.,
given some [] COMPL zz, re OF zz), can easily be translated.}}
d) SCOPE, p, r;
p: ptr
r: rec {{for integer}}
Dynamic action:
• let c be the scope of Val(p);
• c => Val(r).
e) IFIS, p, q, l;
p, q: ptr
l: lab
Dynamic action:
If Val(p) = Val(q)
then
• Goto(Val(l));
otherwise,
• no action.
f) IFISNT, p, q, l;
p, q: ptr
l: lab
Dynamic action:
If Val(p) = Val(q)
then
• no action;
otherwise,
• Goto(Val(l)).
3.4. Instructions concerned with control flow
a) INIT;
Static requirement:
• the instruction is the textually first instruction of the program.
Dynamic action:
• the static level is set to -1, and T and S are set to the key of the C-locale;
• P is set to a pointer to a process descriptor, appointing a site in a fictitious locale of scope 0;
• <Running> => *P.
b) IMAT, l;
l: dec {{line number}}.
Action: none {{but presumably this may be put to some diagnostic use}}.
c) LABEL, l;
l: Ldec
Static action:
• Val(l) is made to label the instruction.
d) GOTO, l;
l: lab
Dynamic action:
• Goto(Val(l)).
e) JUMP, v, l;
l: lab
Dynamic action:
• let k be T;
• T is set to Val(v);
• Discard Par(k);
• Goto(Val(l)).
f) UNL, b, l;
l: lab
Dynamic action:
If Val(b) = Yes
then
• no action;
otherwise,
• Goto(Val(l)).
g) CASE, i, c, l, l_0, l_1, ..., l_n;
i: int
c: dec
l, l_0, ..., l_n: Ldec
Static requirement:
• Val(c) is n.
Dynamic action:
If 0 ≤ Val(i) ≤ n
then
• Goto(Val(l_Val(i)));
otherwise,
• Goto(Val(l)).
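In modern terms, CASE is a bounded jump table with a default target; a sketch (labels modelled as plain strings, names ours):

```python
def case(i, labels, default):
    """labels holds l_0 .. l_n; return the label control transfers to."""
    return labels[i] if 0 <= i < len(labels) else default
```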
3.5. Instructions concerned with parallel action descriptors
a) SPAWN, c, l_2, ..., l_n, l_{n+1}, r;
c: dec
l_2, ..., l_{n+1}: Ldec
r: rec {{for a parallel action descriptor of type Type[PARn]}}
Static requirement:
• Val(c) is n.
Dynamic action:
• let s be a parallel action descriptor <pp, p_1, ..., p_n> whose components are determined as follows:
• pp is P;
• p_1 is <Running>;
• p_i, i = 2, ..., n, is <Halted, sp, T, S, Val(l_i)>, where sp is a pointer appointing a site in a fictitious locale of scope 0, occupied by a copy of a fixed, positive integer;
• <Spawned, Val(l_{n+1})> => *P;
• s => *Val(r);
• P is set to a pointer to p_1.
b) COMPLETE;
Dynamic requirement:
• *P is <Spawned, l> for some label l.
Dynamic action:
• let Spawner(P) {{1.5.2.d}} be <pp, p_1, ..., p_n>;
• <Complete> => *P;
If for some i, 1 ≤ i ≤ n, p_i is not <Complete>, then
• Search Process {{1.7.2.c}};
otherwise,
• <Running> => *pp;
• P is set to pp;
• Goto(l).
c) UP, r;
r: rec {{for integer}}
Dynamic action:
• *Val(r)+1 => *Val(r).
d) DOWN, l, r;
l: Ldec
r: rec {{for integer}}
Static action:
• Val(l) is made to label the instruction.
Dynamic action:
If *Val(r) ≥ 1
then
• *Val(r)-1 => *Val(r);
otherwise,
• <Halted, Val(r), T, S, Val(l)> => *P;
• Search Process {{1.7.2.c}}.
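UP and DOWN behave much like the V and P operations on a counting semaphore (our analogy; in the MIAM a halted process is re-dispatched via Search Process rather than woken directly):

```python
import threading

counter = threading.Semaphore(0)  # the integer site that r appoints
resumed = []

def process():
    counter.acquire()    # DOWN with a zero counter: the process halts
    resumed.append(True)

t = threading.Thread(target=process)
t.start()
counter.release()        # UP: counter+1; the halted process may proceed
t.join()
```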
3.6. Instructions concerned with simple arithmetic
a) ADD, i, j, r;
i, j: int
r: rec {{for integer}}
Dynamic action:
• Val(i)+Val(j) => Val(r).
b) SUB, i, j, r;
i, j: int
r: rec {{for integer}}
Dynamic action:
• Val(i)-Val(j) => Val(r).
c) NEG, i, r;
i: int
r: rec {{for integer}}
Dynamic action:
• -Val(i) => Val(r).
d) MUL, i, j, r;
i: int
j: int
r: rec {{for integer}}
Dynamic action:
• Val(i) × Val(j) => Val(r).
3.7. Instructions concerned with simple comparisons
a) IF, c, i, j, l;
c: LT | LE | EQ | NE | GE | GT
i, j: int
l: lab
Dynamic action:
• let R be < (≤, =, ≠, ≥, >) if c is LT (LE, EQ, NE, GE, GT);
If Val(i) R Val(j)
then
• Goto(Val(l));
otherwise,
• no action.
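The six condition codes map one-to-one onto comparison operators; a sketch (the table name is ours):

```python
import operator

# One entry per condition code of the IF-instruction.
REL = {"LT": operator.lt, "LE": operator.le, "EQ": operator.eq,
       "NE": operator.ne, "GE": operator.ge, "GT": operator.gt}

def if_taken(c, i, j):
    """True when 'IF, c, i, j, l;' would perform Goto(Val(l))."""
    return REL[c](i, j)
```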
The Missing Links: Bugs and Bug-fix Commits
Adrian Bachmann¹, Christian Bird², Foyzur Rahman², Premkumar Devanbu² and Abraham Bernstein¹
¹Department of Informatics, University of Zurich, Switzerland
²Computer Science Department, University of California, Davis, USA
{bachmann,bernstein}@ifi.uzh.ch
{cabird,mfrahman,ptdevanbu}@ucdavis.edu
ABSTRACT
Empirical studies of software defects rely on links between bug databases and program code repositories. This linkage is typically based on bug-fixes identified in developer-entered commit logs. Unfortunately, developers do not always report which commits perform bug-fixes. Prior work suggests that such links can be a biased sample of the entire population of fixed bugs. The validity of statistical hypotheses-testing based on linked data could well be affected by bias. Given the wide use of linked defect data, it is vital to gauge the nature and extent of the bias, and try to develop testable theories and models of the bias. To do this, we must establish ground truth: manually analyze a complete version history corpus, and nail down those commits that fix defects, and those that do not. This is a difficult task, requiring an expert to compare versions, analyze changes, find related bugs in the bug database, reverse-engineer missing links, and finally record their work for use later. This effort must be repeated for hundreds of commits to obtain a useful sample of reported and unreported bug-fix commits. We make several contributions. First, we present Linkster, a tool to facilitate link reverse-engineering. Second, we evaluate this tool, engaging a core developer of the Apache HTTP web server project to exhaustively annotate 493 commits that occurred during a six week period. Finally, we analyze this comprehensive data set, showing that there are serious and consequential problems in the data.
Categories and Subject Descriptors
D.2.8 [Software Engineering]: Metrics—Product Metrics, Process Metrics
General Terms
Experimentation; Measurement; Verification
Keywords
case study; apache; bias; tool; manual annotation
1. INTRODUCTION
Software process data, especially bug reports and commit logs, are widely used in software engineering research. The integration of these two provides valuable information on the history and evolution of a software project. It is used, e.g., to predict the number and location of bugs in future software releases (e.g., [27, 31, 17, 3]). The two data sources are normally integrated by scanning through the version control log messages for potential bug report numbers; conscientious developers enter this information when they check in bug fixes (e.g., see [14]). We used similar techniques in our previous work, and, in fact, improved current practice by adding heuristics to check the results [3, 4]. Even so, the links (between program code commits and bug reports) thus extracted cannot be guaranteed to be correct, as they are reliant on voluntary developer annotations in commit logs.
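A minimal version of such a scanning heuristic, with an illustrative regular expression (not the authors' validated heuristics):

```python
import re

# Pattern and digit range are illustrative only.
BUG_REF = re.compile(r"(?:bug|issue|pr)\s*#?\s*(\d{3,6})", re.IGNORECASE)

def candidate_links(commit_message):
    """Return bug-report numbers plausibly referenced by a commit log."""
    return [int(m) for m in BUG_REF.findall(commit_message)]

candidate_links("Fix segfault in mod_ssl; closes bug #38712")  # -> [38712]
```

Such candidates would still need validation (e.g., checking that the number exists in the bug database and that the report was fixed around the commit date), which is exactly where false links and missing links creep in.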
In prior work, we have shown that such data sets are plagued by quality issues [3]; furthermore, these issues (e.g., incompleteness, bias, etc.) adversely affect applications and algorithms which rely on such data [10]. We defined two types of bias: bug-feature bias, where only the fixes of certain types of defects are linked, and commit-feature bias, where only certain kinds of fixes, or fixes to certain kinds of files, are linked. In addition to these data quality issues, many researchers make questionable process assumptions: for instance, they assume that all the relevant bugs of a software product are actually reported in the bug tracking database of the project. To truly understand defect-reporting bias and verify such assumptions, we must uncover the ground truth: we must analyze completely (at least a time-window of) the commit version history of a project, and precisely identify all the commits that are defect fixes, and those that are not.
To get at ground truth requires skill, knowledge and effort: one must compare successive versions, understand the changes, identify any relevant reported bugs in the repo, and establish a link when possible. This process must be repeated until we have a large enough sample for statistical analysis. This is costly, difficult, and time-consuming.
Linkster is a convenient, interactive tool, integrating multiple queryable, browseable, time-series views of version control history and bug report history. Linkster enables an expert to quickly find and examine relevant changes, and annotate them as desired; specifically, Linkster makes it easy to find defect-fix commits. We engaged an expert Apache core developer, Dr. Justin Erenkrantz, to use Linkster to manually annotate 6 full weeks (including 493 commit messages) of the Apache history. This case study helped us to
improve the tool, and yielded a trove of data to examine three research questions.
Traditionally, researchers have made several assumptions about the bug fixing, reporting, and linking phenomena. The first two research questions reflect general internal validity concerns that arise when using linked bug data for software engineering research.
**RQ 1:** Do the bug reporting and fixing practices of developers correspond to the assumptions commonly made by researchers?
Second, researchers have tended to gloss over the issue of whether automated tools that find links between commits and bug reports have false-positives or false-negatives.
**RQ 2:** How well does the automated approach of finding links between commits and bug reports work?
Finally, the linked set of bug-fixing commits is a sample of the full set of bug-fix commits. We can check and see if this sample is biased in any noticeable way.
**RQ 3:** Is there any evidence of systematic bias in the linking of bug-fix commits to bug reports?
To our knowledge, the only published study on this question is by Aranda and Venolia [1]; they analyzed the completeness and degree of truth in software engineering datasets and provided a partial answer to RQ1 (see Sub-Section 2.2). Most studies do not even address data quality issues.
In addition, we were able to qualitatively explore how the Apache project actually uses software engineering tools such as bug tracker and version control systems, yielding some rather surprising observations.
We begin with a discussion of related work (Section 2), followed by an overview of the tools and processes (Section 3) used in the Apache HTTP server project. We then present (Section 4) a description of Linkster, and details of the case study procedure involving an Apache core developer (Section 5). In Sections 6 and 7, we present our findings, which we summarize briefly below:
**Finding 1:** A so-called “bug” is not always a bug: neither is a “commit” always a commit. In other words: in Apache, the most important bugs are not handled in the bug tracker but mentioned in the mailing list system; and only a fraction of commits actually pertain to program changes (RQ1).
**Finding 2:** We compared the manual annotations with data produced by automated linking (viz., for false-positives or false-negatives); the automated approach finds virtually all the commit log messages which contain a link to the bug tracking database (RQ2). Sadly, however, many defect-fix commits are un-identified in the commit log, and thus are invisible to automated approaches.
**Finding 3:** In the manually annotated sample, we find strong statistical evidence that different bug-fixers vary in their linking behavior. Investigating further, we find anecdotal evidence suggesting that factors such as experience, ownership and the size (number of files) of the commit affect linking behaviour. We also find that reporting bias affects the performance of a bug prediction algorithm (BugCache). Given the small size of the manually annotated sample, the evidence here is mostly suggestive rather than statistically significant; however, it points out the strong need for further studies—for if this type of reporting bias is confirmed as a widespread problem, this is of serious, fundamental concern to all empirical research that uses this type of linked bug-fix data.
## 2. RELATED WORK
Areas closely related to this research include data extraction and integration, data quality in software engineering, data verification in software repositories, and our own previous work on data quality effects on empirical engineering.
### 2.1 Data Extraction and Integration
Software engineering process data such as bug reports and version control log files are widely used in empirical software engineering. Therefore, the extraction and integration of this data is critical.
Fischer et al. [14] presented a Release History Database (RHDB) which contains the version control log and the bug report information. To link the change log and the bug tracking database, Fischer et al. searched for change log messages which match a given regular expression. Later, they improved the linking algorithm and built in a file-module verification [13]. A similar approach to link the change log with the bug tracking database was chosen by other researchers. All of them used regular expressions to find bug report link candidates in the change log file (e.g., [32], [31], [30], [54], [34]).
In [19], we presented a step-by-step approach to retrieve, parse, convert and link the data sources. We improved the well-established prior art, enhancing both the quality and quantity of links extracted.
### 2.2 Data Quality in Software Engineering
As discussed in [16], empirical software engineering researchers have considered data quality issues. Space limitations inhibit a full survey; we present a few representative papers.
Koru and Tian [21] surveyed members of 52 different medium to large size Open Source projects with regards to defect handling practices. They found that defect-handling processes varied among projects. Some projects are disciplined and require recording of all bugs found; others are more lax. Some projects explicitly mark whether a bug is pre-release or post-release. Some record defects only in source code; others also record defects in documents. This variation in bug datasets requires a cautious approach to their use in empirical work. Liebchen et al. [22] examined noise, a distinct, equally important issue.
Liebchen and Shepperd [23] surveyed hundreds of empirical software engineering papers to assess how studies manage data quality issues. They found only 23 that explicitly referenced data quality. Four of the 23 suggested that data quality might impact analysis, but made no suggestion of how to deal with it. They conclude that there is very little work to assess the quality of data sets and point to the extreme challenge of knowing the “true” values and populations. They suggest that simulation-based approaches might help.
Bettenburg et al. [7, 8, 9] provided a first analysis of bug report quality. They investigated the attributes of a good bug report by surveying developers, and used them to develop a computational model of bug report quality. The resulting model made it possible to display the current quality of a defect report while typing. Hooimeijer et al. [16] also analyzed the
quality of defect reports and tried to predict whether a defect report will be closed within a given amount of time.
Chen et al. [12] studied the change logs of three Open Source projects and analyzed the quality of these log files. In [4] we surveyed five Open Source and one Closed Source project in order to provide a deeper insight into the quality and characteristics of these often-used process data. Specifically, we defined quality and characteristics measures, computed them and discussed the issues arising from these observations. We showed that there are vast differences between the projects, particularly with respect to the quality of the link rate between bugs and commits.
Aranda and Venolia [1] provided a field study of coordination activities around bug fixing, based on a survey of software professionals at Microsoft. Specifically, they studied 10 bugs in detail and showed that (i) electronic repositories often hold incomplete or incorrect data, and (ii) the histories of even simple bugs are strongly dependent on social, organizational, and technical knowledge that cannot be solely extracted through the automated analysis of software repositories. They report that software repositories show an incomplete picture of the social processes in a project. While they studied 10 bugs in detail, we focus on commit history: we employed an expert, supported by a specially-designed tool to fully annotate a sample of 493 commits. This data helped us uncover a) some of the weaknesses of software repositories as well as b) anecdotal evidence of systematic bias in bug-fix reporting.
2.3 Studying Bias
Papers in empirical software engineering rarely tackle data quality issues directly (see discussion earlier in this section); our earlier work is an exception. In [2] and [10] we investigated historical data from several software projects, and found strong evidence of systematic bias. We then investigated potential effects of “unfair, imbalanced” datasets on the performance of prediction techniques.
Ideally, all bug-fixing commits are linked to bug reports; then empirical research would consider all types of fixed bug reports. However, only some of the fixed bugs have links to the bug-fixing commits. This raises the possibility of two types of bias: bug feature bias, where only certain types of bugs are linked, or commit feature bias, whereby only certain types of bug-fixing repairs are linked. Either type of bias is highly undesirable. With access to all the fixed bugs, and the linked bugs, we could check for bug feature bias. Our study [10] suggested that bug feature bias does exist, and also that it affects the performance of the award-winning BugCache defect prediction algorithm [19]. In this work, we have a fully annotated list of commits for the first time, thus achieving “ground truth” for a subset of the Apache dataset, and thus we can analyze the data for commit feature bias.
In summary: few studies explicitly consider data quality or systematic bias in the data. This study, in contrast, explores the implications of this behavior by attempting to unearth the ground truth: we enlisted a core developer to annotate all commits, and thus sought out quality and bias issues.
3. CASE STUDY: APACHE
The APACHE HTTP SERVER is an open source software system developed under the auspices of the Apache Software Foundation. APACHE is the most popular web server on the Internet, serving over 55% of all websites [26]. APACHE is also one of the most popular Open Source projects among researchers. It is widely used in current empirical software engineering research (e.g., [23] [25] [20] [8] [18]), and thus a good subject for an in-depth examination of data quality.
3.1 Project Tools
Like many other Open Source projects, APACHE uses the BugZilla[1] bug tracker and the SVN[2] version control system. In addition, the Apache Software Foundation provides officially maintained git[3] mirrors for all projects. The APACHE project allows free access to the contents of all these tools. APACHE also maintains a public mailing list for developers and APACHE users to discuss issues of concern.
3.2 Data Gathering and Integration
We retrieved, processed and linked the APACHE HTTP WEB SERVER process data as presented in [3]. Basically, we downloaded all BugZilla bug reports and SVN version control log files. Then, we scanned each commit log message for indications of fixing a bug using a set of heuristics; typically we look for bug report numbers in log messages. This leads to a set of automatically extracted links between program code commits and bug reports. This set of links is validated using another set of heuristics (op cit).
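The scanning step can be sketched as follows. This is an illustrative reconstruction, not the exact heuristics of [3]: the regular expression, the commit messages, and the bug number are hypothetical, and the ±7-day validation window is borrowed from the constraint mentioned later in Section 6.

```python
import re
from datetime import datetime, timedelta

# Hypothetical pattern; the actual heuristics in [3] are richer.
BUG_REF = re.compile(r"(?:bug|pr|fix(?:es|ed)?)\s*#?\s*(\d{3,6})", re.IGNORECASE)

def candidate_links(commits, bug_reports, window_days=7):
    """Link commits to bug reports: a commit log that mentions a bug
    number is linked if the report's fixed-date lies within the window."""
    links = []
    for commit in commits:
        for match in BUG_REF.finditer(commit["message"]):
            bug_id = int(match.group(1))
            report = bug_reports.get(bug_id)
            if report is None:
                continue
            # validate: status change to 'fixed' must be close to commit time
            if abs(commit["date"] - report["fixed_on"]) <= timedelta(days=window_days):
                links.append((commit["rev"], bug_id))
    return links

# invented example data (the commit message and bug id are made up)
commits = [
    {"rev": 291558, "date": datetime(2005, 9, 26), "message": "Fix for bug #36594: segfault in mod_ssl"},
    {"rev": 291600, "date": datetime(2005, 9, 27), "message": "Update STATUS, vote on backport"},
]
bugs = {36594: {"fixed_on": datetime(2005, 9, 25)}}
print(candidate_links(commits, bugs))  # only the first commit links to bug 36594
```

A message with no recognizable bug number (the second commit above) simply produces no link, which is exactly how the fixes discussed only on the mailing list escape such heuristics.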
3.3 Apache Dataset
With our own (rather modest) resources, we could only completely evaluate and manually verify a subset of the original APACHE dataset. Therefore, we had to sample the original dataset. There were two choices: random sampling or temporal sampling.
Random sampling requires some rationale for selecting a sample—e.g., prior knowledge of the distribution of the relevant co-variates to the study, so that a sample representative of the population could be chosen. It is difficult to decide a priori what such co-variates might be, let alone their distribution. So, we chose to perform temporal sampling.
<table>
<thead>
<tr>
<th>Table 1: Apache Datasets: Details</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dataset</td>
</tr>
<tr>
<td>Considered time period</td>
</tr>
<tr>
<td></td>
</tr>
<tr>
<td>#Bug reports</td>
</tr>
<tr>
<td>#Fixed bug reports</td>
</tr>
<tr>
<td>#Linked bug reports</td>
</tr>
<tr>
<td>#Duplicate bug reports</td>
</tr>
<tr>
<td>#Invalid bug reports</td>
</tr>
<tr>
<td>#Different bug reporters</td>
</tr>
<tr>
<td>#Commit messages (transactions)</td>
</tr>
<tr>
<td>#Empty commit messages</td>
</tr>
<tr>
<td>#Linked commit messages</td>
</tr>
<tr>
<td>#Different developers</td>
</tr>
</tbody>
</table>
1See http://www.bugzilla.org/
2See http://subversion.tigris.org/
3See http://git-scm.com/
4We define “fixed” bug reports as bug reports that have at least one associated fixing activity (which means a status change to “fixed”) within the considered time period.
With this approach, we chose to verify all the commits in a given period. With complete results for that period, we can then revisit our earlier results and judge their quality against this limited but complete and accurate temporal sample. To find a “typical” period for our evaluation dataset, we analyzed the whole original Apache dataset based on week-long epochs. Then, we chose a period of 6 consecutive weeks that was as representative as possible of the overall original Apache dataset in terms of its descriptive process statistics (e.g., similar proportions of bugs and commits). Table 1 lists some basic software process statistics for both the original and the evaluation Apache datasets, including the finally defined time-frames.
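The window selection can be sketched as follows. This is a minimal reconstruction under assumptions: the paper compared richer descriptive statistics than the two per-week counts used here, and the distance measure below is our own choice.

```python
def most_representative_window(weekly_stats, window=6):
    """Pick the `window`-week span whose average weekly statistics are
    closest (squared relative distance) to the overall weekly averages.
    Each entry of weekly_stats is a tuple, e.g. (#bug reports, #commits)."""
    n_features = len(weekly_stats[0])
    overall = [sum(w[i] for w in weekly_stats) / len(weekly_stats)
               for i in range(n_features)]

    def distance(start):
        span = weekly_stats[start:start + window]
        dist = 0.0
        for i in range(n_features):
            avg = sum(w[i] for w in span) / window
            dist += ((avg - overall[i]) / overall[i]) ** 2
        return dist

    return min(range(len(weekly_stats) - window + 1), key=distance)

# toy data: (#bug reports, #commits) per week
weekly = [(1, 40), (2, 35), (5, 21), (6, 19), (4, 22), (5, 20),
          (5, 18), (6, 21), (9, 2), (1, 1), (12, 44), (0, 3)]
start = most_representative_window(weekly, window=6)
print("best window starts at week", start)
```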
4. LINKSTER
The use of LINKSTER simplified our domain expert’s task, greatly accelerating an otherwise tedious, repetitive and inconvenient sequence of invocations of multiple tools. Figure 1 shows a screenshot of LINKSTER, with windows containing three kinds of information: commit transactions including all the changed files (a), bug reports (b), and diff & blame information for all of the lines in a file before and after a particular commit (c).
LINKSTER requires access to a version control system for file content and a database (local or remote), containing the raw mined repository and bug tracking information. We use git as our backend repository format, given its increasing popularity, and ready availability of tools supporting conversion from competitors such as CVS, SVN, etc. However, for convenience, LINKSTER displays the revision IDs from the original repository. All notes, links, and annotations (explained below) made by the user are also recorded in the database to facilitate use and analysis thereof after annotation. LINKSTER efficiently displays, integrates, and allows inspection and annotation of information from all data sources. LINKSTER is written in Python, using the PyQt widget toolset and has been written with portability in mind. We have successfully run it on Linux, OS X, and Windows.
To our knowledge, no other tool provides integrated project information in combination with functionality to annotate / link commits. Hipikat [32], which was developed at UBC, is similar in that it creates links between different types of software artifacts. However, these links are based purely on heuristics, and Hipikat functions as a recommender system rather than a browsing and annotation system.
Other tools such as EvoLens, SoftChange, or Shrimp provide only part of the functionality, but all existing tools have goals other than expert commit annotation.
SoftChange [15] is a tool to aid software engineering research by visualizing data. Similar to LINKSTER, SoftChange integrates data from multiple sources such as version control systems, releases, and bug databases. However, SoftChange uses visualizations (usually plots) to answer questions (e.g., how many bugs are closed in each time period?) and does not allow annotation of data as LINKSTER does.
EvoLens [29] helps developers to understand the evolution of a piece of software by visualizing the software as well as metrics of the software over time. The visual nature across time facilitates identifying design erosion and hot spots of activity. LINKSTER does not leverage advanced visualization techniques and integrates multiple types of data rather than just source code information.
Shrimp [24] integrates and visualizes source code, documentation (Javadoc), and architectural information to aid source code exploration. Linkster is more concerned with process-related artifacts (e.g., changes, discussions, bug reports, and fixes) than with understanding the source code itself.
4.1 Commit Information
Figure 1a shows the Commit Information Window of Linkster. The top (1) contains a list of commits that satisfy some query, e.g., commits within a time window or changes made by a particular author. Each line shows the revision identifier (as used in the original repository), commit time, author, and the first line of the commit message. The entire commit message is shown in a tooltip when the mouse hovers over an entry.
When a commit entry in the list is selected, the metadata is updated in the bottom half (2). The list of files modified in the commit (3) is also displayed. Double clicking a file brings up the Blame & Diff Information for the file allowing the user to examine the exact changes that were made. For annotation purposes, the user may select the reason(s) for the commit by checking boxes (4) or drag and drop (or remove) a bug record from the Bug Information Window into the list of bug IDs (5), which is populated with the set of automatically identified links between the commit and bug records. Finally, the user may enter free form notes for the commit (6).
4.2 Bug Information
Figure 1b contains the Bug Information Window. The top portion (7) is a scrollable list of bugs from the bug database. Each entry contains the bug ID, the date of creation, and a one line summary of the bug. Hovering over an entry shows the bug severity in a tooltip. Any of these entries may be dragged to the bug IDs list (5) in the commit information window to indicate a commit that is associated with the bug.
Selecting a bug entry populates the bottom half of the window with detailed information. The left side (8) contains short attributes of the bug, while the right side (9) displays the full bug description followed by all of the comments in chronological order with author and date. Clicking on the Bug Activity tab (10) displays a list (not shown) of all changes to the bug record, such as assigning the bug to a developer or marking a bug as closed. Each entry indicates when the change was made and who made it along with old and new values for the changed field as appropriate. Finally, clicking on the Fixing Files tab (11) presents a list (not shown) of all of the commits to files that are associated with the fix of the bug. This list is comprised of files automatically or manually linked to the bug. Double clicking on any file in this list will bring up a blame & diff window for the commit.
4.3 Blame & Diff Information
Figure 1c shows the Blame & Diff Information Window for the changes to a file in a particular commit. The left view (13) shows the content of the file prior to the change, and the right view (14) shows the content after the change. Removed lines are prefixed with “−” and are highlighted red, and added lines are in green with a “+” prefix. Each line is also prefixed with the revision identifier of the commit that introduced the line. Selecting a line highlights all other lines introduced in the same commit, and also updates the metadata area (12) with information about that commit. This can help the user learn why, when, and by whom the line was originally added. If additional information is desired, double clicking a line will bring up a new Blame & Diff window for the commit which introduced the line (if, for example, one desires to see why a line that was removed in one revision was originally added in a prior revision). An annotator can, thus, gradually step back through version history.
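Tracing a line back to its introducing commit is essentially a repeated blame. A minimal sketch of the parsing side of that idea follows; it assumes git's default `git blame` output layout, and the sample output (hashes, file content) is invented, since the paper does not publish LINKSTER's implementation:

```python
import re

# Assumed default `git blame` line layout:
#   <sha> (<author> <date> <time> <tz> <line-no>) <content>
BLAME_LINE = re.compile(
    r"^\^?(?P<sha>[0-9a-f]{7,40})\s+\((?P<author>.+?)\s+"
    r"(?P<date>\d{4}-\d{2}-\d{2}) \S+ \S+\s+(?P<lineno>\d+)\)"
)

def introducing_commits(blame_output):
    """Map each line number to the commit (and author) that introduced it."""
    origin = {}
    for line in blame_output.splitlines():
        m = BLAME_LINE.match(line)
        if m:
            origin[int(m.group("lineno"))] = (m.group("sha"), m.group("author"))
    return origin

# invented blame output for illustration
sample = """\
3f1c2ab (jerenkrantz 2005-09-20 10:12:45 +0000  1) static int fixup(request_rec *r)
9d04e11 (wrowe       2005-09-26 18:03:02 +0000  2)     if (r->status != HTTP_OK)
"""
print(introducing_commits(sample))
```

Stepping back one revision, as the Blame & Diff window does, amounts to re-running blame at the parent of the introducing commit and repeating this lookup.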
The views are synchronized such that scrolling up, down, left, or right in one view causes the other to change accordingly. The thumbnail view (15) graphically shows the differences for the entire file with red indicating removed lines and green, added lines. Clicking on a location in the thumbnail view will cause the pre and post views to jump to that location, making it easier to identify and examine changes in larger files.
5. APACHE DATA EVALUATION
To address our research questions, we began our evaluation with the creation of an evaluation dataset, as defined in Section 3.3. Armed with Linkster to facilitate browsing and annotation, we engaged the services of an informant: an experienced APACHE developer, Dr. Justin Erenkrantz, to manually annotate a temporal sample of commits using Linkster. Clearly, the quality of this completely annotated evaluation dataset is predicated on the expertise of the annotator. Justin is a core developer of the APACHE HTTP Web server project (since January 2001), the President of the Apache Foundation and serves on the Foundation’s Board of Directors. He also develops for Apache Portable Runtime, Apache flood and Subversion.
Using Linkster, Justin annotated each commit, to flag it as a bug fix, an implemented feature request, a maintenance task or other. With this information, we obtain fully annotated commit data, providing a complete picture of all the changes during the given period and how/why/by whom these changes were made. This data can be used to verify our automated linking approach (which includes mainly bug fixes and some feature requests). Indeed, annotating program code commits dating back months or years in the past is a challenge, even for an experienced core developer like Justin. Linkster was very helpful, providing an integrated view of all the relevant information. Based on the log message, the changed files and the file diffs of the changed files, Justin was able to annotate all commits, and, in most cases, provided additional information about the commits.
Justin’s familiarity with the APACHE project gives us confidence that the results of our evaluation can be trusted. In addition, detailed discussions and interviews with him revealed facts about the tools and processes used in the APACHE HTTP WEB server project, and also ideas for improving Linkster.
6. RESULTS
All 493 commits in our selected temporal sample were annotated. In addition to the annotation into the four categories above (bug fix, feature request, maintenance/refactoring, and other), our informant helped us further sub-classify the commits. Table 2 summarizes the annotation results including the sub-classification. Note that a single commit can have many annotations; e.g., a commit may be annotated as both a “bug fix” and a “feature request”.
See http://www.erenkrantz.com/ for more details.
6.1 Bugs Incognito
Finding 1. Not all fixed bugs are mentioned in the bug tracking database. Some are discussed (only) on the mailing list.
As shown in Table 2, we have 82 bug-fix related commits in our evaluation dataset. 32 of them (bug report) are directly related to the bug tracking database. 7 other commits contain a bug fix but are not the initial fixing commit; rather, they are merges of versions which contain bug fixes indirectly (bug report (merge)). This means that only 47.6% of the bug-fix related commits (32+7) are documented in the bug tracking database. For 13 other commits (16% of total) identified by Justin as bug fixes, there are related discussions in the Apache mailing list. This leads to the discouraging observation that many bugs never appear in the bug tracking database, but rather are only discussed on the mailing list. Such a discussion often includes the bug fix provided by a non-core Apache developer. According to Justin, these are often the very important bugs, precisely because of the high attention of Apache developers and the core community on the mailing list. Note also that reporting some types of bugs (e.g., security related ones) on the mailing list is a practice explicitly requested by the Apache Foundation.
Unfortunately, even knowing about the mailing list bugs, it is hard to i) identify and ii) automatically mine them or extract information similar to a bug report stored in the bug tracking database (such as status changes, priority, severity, etc.). Apache SVN revision #291558 (see Figure 2), for instance, is related to a bug discussed on the mailing list. If one were to inspect the mailing list message, one would find almost no evidence that this was a bug fix.
Finally, Justin found 17 other bug-fixing commits (21%) which have neither an associated bug report nor a mailing list message. This phenomenon of under-reporting of bugs is a big problem. If important bugs are excluded from experimental data (i.e., many bugs are left out), then the effectiveness of defect prediction models and the validity of statistical studies (which rely on bugs being in the bug tracking database) may be threatened. This leads to the conclusion that not all fixed bugs are reported as bugs in the bug tracking database, or in other words: bugs go “incognito”.
6.2 Backport Incognito
In the Apache HTTP web server project only a few developers are allowed to commit to an Apache release version: thus a bug-fix on one release may actually have to be committed by someone else to an older or different release. Typically, this process works as follows. First, a developer fixes a given bug and commits the new version to the current version under active development (also known as the “trunk”). Ideally s/he also refers to the related bug report in the commit log. Next, at least two other developers review the changed code, verify the changes and vote either for or against the fix (this step is related to the voting commits as shown in Table 2 and 3). Finally, if the votes are positive, the fix is committed (or merged) to Apache release versions, which is called a backport. As a result of this process, we might find several different commits in the version history, that fix the same bug.
Finding 2. To fix a bug in an Apache release, multiple similar commits by different developers are needed.
Unfortunately, backport commits are not that easy to identify by existing linking algorithms and heuristics; frequently, while the log message for original commit to the trunk refers to the bug report, the backport commit log does not. To worsen matters, after the bug is actually closed,
there is a rigorous review, verification and voting process before the backport is accepted and committed. Therefore, the time difference between the backport commit and the status change (to fixed) on the bug report may grow to several days, which again makes it difficult to link the bug with the commit. As a result, automated linking algorithms will largely ignore backport fixes. Arguably, these fixes are very important: they are often involved in post-release failures. They should not be ignored by researchers engaged in hypothesis testing or defect prediction work. Also, finding them may require extensive, high-expertise combing through commit histories.
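One pragmatic way to recover such backports automatically (our suggestion, not a technique from this paper) is to fingerprint the content of each change while ignoring hunk headers and context, so that the trunk commit and its backport hash to the same value even though they touch different line numbers. The diffs below are invented for illustration:

```python
import hashlib

def patch_fingerprint(diff_text):
    """Hash only the added/removed lines of a unified diff, so the same
    fix applied to trunk and to a release branch yields the same value
    even if hunk headers and context lines differ."""
    core = [line[1:].strip()
            for line in diff_text.splitlines()
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]
    return hashlib.sha1("\n".join(core).encode()).hexdigest()

# hypothetical trunk fix and its backport: same change, different hunk offsets
trunk_fix = """\
@@ -120,7 +120,7 @@ static int fixup(request_rec *r)
 {
-    if (r->status = HTTP_OK)
+    if (r->status == HTTP_OK)
 }
"""
backport_fix = """\
@@ -98,7 +98,7 @@ static int fixup(request_rec *r)
 {
-    if (r->status = HTTP_OK)
+    if (r->status == HTTP_OK)
 }
"""
same = patch_fingerprint(trunk_fix) == patch_fingerprint(backport_fix)
print("backport matches trunk fix:", same)  # True
```

Matching by content rather than by log message would sidestep both problems above: the missing bug reference and the multi-day lag between backport and status change.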
Finding 4. Even if we annotate all commits, the cause of a commit still remains unspecified in some cases.
Table 2 and 3 show the annotation, sub-classification and process-oriented classification of all the commits in our evaluation dataset. Based on the values in Table 3, 110 commits (22.3%) carry the process-specific annotation of other. The reason for these commits, therefore, cannot be attributed to one of the core APACHE software engineering tasks.
In addition, most of the commits are motivated not by a bug fix or feature request but rather by documentation (32%), voting (5.3%) or releases (8.9%). Only 37.1% of all commits have a functional impact on the software product (feature requests and bug fixes including all backports), which leads us to the conclusion that not all commits are commits that actually change the software.
For additional information on the quality and characteristics of the version control data, we refer to our previous work presented in [4].
6.4 Commits Incognito
In earlier work [3, 4, 10, 5], we reported a linking algorithm whose performance was found to be best-in-class. The fully annotated data provided the first known oracle to evaluate linking algorithms, and so we evaluated ours.
Finding 5. The algorithm (op cit.) finds most of the commit log messages that the developers linked to bugs reported in the bug tracker, subject to the time constraints used by our algorithm.
In the chosen temporal sample, our linking algorithm found 29 links between the commit messages and the bug tracking database. Justin also identified all these links; we thus found no false-positive links in our evaluation dataset. In addition to these, Justin found 10 additional links. Seven did not satisfy our heuristic for valid links (time constraint of ±7 days between commit and status change on the bug report), and so our algorithm rejected them as invalid links. Hence, we found three false-negative links in our evaluation dataset. The seven invalid links resulted from backport commits (as explained earlier, Sub-Section 6.2). These backports corresponded to bug-fix links in the original trunk which in fact, were successfully discovered by our algorithm.
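Given the fully annotated sample, evaluating any linker reduces to comparing its set of links against the oracle. A sketch, with hypothetical link identifiers whose counts mirror this section (29 algorithm links, all correct, against 39 true links):

```python
def evaluate_links(found, truth):
    """Precision/recall of a set of (commit, bug) links against an oracle."""
    tp = len(found & truth)
    precision = tp / len(found) if found else 1.0
    recall = tp / len(truth) if truth else 1.0
    return precision, recall

# hypothetical link IDs standing in for (commit, bug) pairs
truth = {("c%d" % i, "b%d" % i) for i in range(39)}   # Justin's 39 links
found = {("c%d" % i, "b%d" % i) for i in range(29)}   # algorithm's 29 links
precision, recall = evaluate_links(found, truth)
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=1.00 recall=0.74
```

Note that seven of the ten missed links were rejected deliberately by the ±7-day time constraint, so the recall shortfall here reflects the heuristic's design as much as its errors.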
Unfortunately, as we elaborated before, even with a high linking rate between the commit messages and the bug tracker, only a subset of the fixed bugs is considered. Hence, bugs discussed on the mailing list are often left out by automated linking approaches.
6.6 Performance of LINKSTER
LINKSTER performed mostly as expected and Justin was able to annotate all the commits (493) of our evaluation sample dataset in one working day. In the discussions with Justin, we found some minor issues, which were promptly remedied. In addition, we found that the most important bugs are discussed in the mailing list system only. Therefore, LINKSTER has been extended to support browsing of messages from development mailing lists and also enables linking them to both bug reports and repository commits.
6.7 Threats to Validity
This sub-section discusses external and internal threats to validity that can affect the results reported in this section.
Threats to external validity. Can we generalize from the results based on the APACHE HTTP WEB SERVER dataset to other datasets? Software engineering tools and processes vary across projects and, therefore, our findings based on APACHE may not generalize. However, our findings indicate that developers may use software process support tools for various goals not envisioned by their original developers (such as version control systems for voting or mailing list systems for bug reporting). It seems prudent to assume that the APACHE project is not a complete exception and that, therefore, the data used in studies of other projects may also lack important information. Another threat is the use of a single annotator (Justin). Getting the same data annotated by other developers, and checking agreement, would have been better; we hope to do this in future work.
Threats to internal validity. Did we choose our evaluation dataset well, and properly analyze it? We chose our time-frame carefully; however, it may not properly represent the original APACHE dataset. The annotation and classification were performed carefully by a very experienced APACHE core developer. Still, there may be errors. Nonetheless, according to Justin, the interesting practices of the APACHE developers are by no means exceptional to this time period.
7. COMMIT-FEATURE BIAS, REVISITED
The manual annotation effort indicates that many bug fixes are not identified in the commit logs, and thus are completely invisible to the automated linking tools used to extract bug-fix data. Thus the linked bug-fix commits are a sample of the entire group of commits. However, samples thus extracted have been central to many research efforts.
The natural question is: is this sample representative, or biased? We seek to test for the two kinds of bias: bug feature bias, whereby only fixes to certain kinds of bugs are linked, and commit feature bias whereby only certain types of commits are linked [10]. Earlier, with access to the entire set of fixed bugs, and the subset of linked bugs, we could check for (and find) bug feature bias; lacking access to a fully annotated set of commits that tells us which commits are bug fixes, we were previously unable to check for commit feature bias.
Now, with a fully annotated temporal sample of commits, we can indeed check for commit feature bias. Commit features are properties of the file and its revision history, such as size, complexity, authorship, etc. These are critical properties that have been studied in dozens of papers that test theories of bug introduction; they are also the features used for bug prediction. So it is important to test for commit feature bias, and evaluate its impact. In this section, we describe some findings related to commit feature bias, and its effect on a well-known bug-prediction algorithm (BugCache).
We remind the reader that our sample size (despite the time and effort required to gather even that much) is not big enough to realistically expect to find statistically significant support for answers to the questions discussed in this section. However, there are some takeaways: we do find statistical support for the answer to one question, and we do find some anecdotal answers for the other questions. Furthermore, actual bias along any of the lines discussed here would have a highly deleterious effect on the external validity of theories tested using only the linked data. Most importantly, we hope to convince the reader that such studies are important and need to be repeated and conducted at larger scales.
7.1 Sources and Extent of Commit Feature Bias
The first question arises naturally from the fact that there are different individual developers, who may have different attitudes towards linking. The simplest and most obvious question is as follows:
**Do different developers show significantly different linking behaviour?** The anonymized table of developers’ linking behavior indicates that this is the case: (p ≃ 0.002).
<table>
<thead>
<tr>
<th>Name</th>
<th>Linked</th>
<th>Not Linked</th>
<th>Name</th>
<th>Linked</th>
<th>Not Linked</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>0</td>
<td>6</td>
<td>b</td>
<td>10</td>
<td>5</td>
</tr>
<tr>
<td>c</td>
<td>1</td>
<td>1</td>
<td>d</td>
<td>11</td>
<td>8</td>
</tr>
<tr>
<td>e</td>
<td>0</td>
<td>3</td>
<td>f</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>g</td>
<td>0</td>
<td>3</td>
<td>h</td>
<td>0</td>
<td>5</td>
</tr>
<tr>
<td>i</td>
<td>2</td>
<td>7</td>
<td>j</td>
<td>0</td>
<td>3</td>
</tr>
<tr>
<td>k</td>
<td>0</td>
<td>2</td>
<td>l</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>m</td>
<td>0</td>
<td>2</td>
<td>n</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>o</td>
<td>0</td>
<td>1</td>
<td>p</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>q</td>
<td>4</td>
<td>0</td>
<td>Total</td>
<td>29</td>
<td>49</td>
</tr>
</tbody>
</table>
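The paper does not state which test produced p ≃ 0.002. One distribution-free way to test developer-vs-linking independence on such a sparse table is a permutation test on the χ² statistic; a sketch, using the per-developer counts from the table above:

```python
import random

# (linked, not linked) per developer a..q, from the table above
counts = [(0, 6), (10, 5), (1, 1), (11, 8), (0, 3), (0, 1), (0, 3), (0, 5),
          (2, 7), (0, 3), (0, 2), (0, 1), (0, 2), (0, 1), (0, 1), (1, 0), (4, 0)]

def chi2(table):
    """Pearson chi-squared statistic for a developers x {linked, not} table."""
    linked = sum(l for l, n in table)
    total = sum(l + n for l, n in table)
    rate = linked / total
    stat = 0.0
    for l, n in table:
        row = l + n
        for observed, expected in ((l, row * rate), (n, row * (1 - rate))):
            stat += (observed - expected) ** 2 / expected
    return stat

def permutation_p(table, trials=4999, seed=1):
    """Permutation p-value: reshuffle the linked/not labels across commits."""
    rng = random.Random(seed)
    observed = chi2(table)
    labels = [1] * sum(l for l, n in table) + [0] * sum(n for l, n in table)
    rows = [l + n for l, n in table]
    hits = 0
    for _ in range(trials):
        rng.shuffle(labels)
        shuffled, i = [], 0
        for r in rows:
            chunk = labels[i:i + r]
            shuffled.append((sum(chunk), r - sum(chunk)))
            i += r
        if chi2(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)

p = permutation_p(counts)
print("p =", p)  # small, consistent with the reported p ≈ 0.002
```

A permutation test avoids the small-expected-cell assumptions of the asymptotic χ² test, which matters here since most developers have only a handful of commits.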
We now hypothesize several different specific possible motivational theories of linking behavior. In several cases, there was a visually apparent signal in boxplots, albeit none that was statistically significant. The results are shown in Figure 3. We list them below, but we caution the reader to interpret all these findings as, at best, anecdotal. However, it is important to bear in mind that actual bias influenced by any of the processes hypothesized below would be very damaging to the external validity of theories tested solely on the linked data.
**Does the experience of the author(s) whose code is being fixed influence linking behaviour?** We hypothesized that the quest for greater reputation might incentivize people to link fixes when the code under repair belonged to an experienced (and thus more reputable) person. We measured the fixed code’s “author reputation” as the geometric mean of the prior commit experience of everyone who contributed to the fixed code. The left-most boxplot in Figure 3 is weakly suggestive that fixes made to code with more experienced authorship are more likely to be linked.
**Does the number of files involved in the bug fix matter?** If more files are repaired in a bug fix, perhaps the fix is more “impactful”; this might motivate the fixer to more carefully document the change. In fact, the boxplot (second from left in Figure 3) is suggestive that this might be the case, with all the unlinked fixes being single-file fixes.
**Are more experienced bug fixers more likely to link?** We might expect that more experienced developers behave more responsibly. We measure experience as the number of prior commits. The boxplot (second from right) suggests support for this theory, with a noticeably higher median for the linked case.
**Are developers who “own” a file more likely to link bug-fixes in that file?** One might expect that people fixing bugs in their own files are more likely to behave responsibly and link; on the other hand, there is an anti-social, reputation-preserving instinct that suggests they may be less likely to link. We measure ownership as the proportion of lines in the file authored by the bug fixer. Indeed, the boxplot visually supports the “anti-social” theory.
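The two authorship-based measures above can be computed directly from per-line blame data. A sketch under assumptions: the paper describes both measures only informally, so the exact details (deduplicating contributors, treating the blame data as one author per line) are ours, and the example data is invented:

```python
import math

def author_reputation(line_authors, experience):
    """Geometric mean of the prior commit counts of everyone who
    contributed a line to the fixed code."""
    contributors = set(line_authors)
    logs = [math.log(experience[a]) for a in contributors]
    return math.exp(sum(logs) / len(logs))

def ownership(line_authors, fixer):
    """Proportion of lines in the file authored by the bug fixer."""
    return sum(1 for a in line_authors if a == fixer) / len(line_authors)

# hypothetical blame data: author of each line, and prior commit counts
lines = ["alice", "alice", "bob", "carol"]
prior_commits = {"alice": 100, "bob": 10, "carol": 1}
print(author_reputation(lines, prior_commits))  # geometric mean of 100, 10, 1 -> 10.0
print(ownership(lines, "alice"))                # 0.5
```

The geometric mean dampens the effect of a single prolific contributor, which is presumably why it is preferred over the arithmetic mean for a skewed quantity like commit counts.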
[Figure 3: boxplots comparing linked and not-linked bug-fix commits on author reputation, number of files committed, fixer experience (commit count), and file ownership.]
We created plots to evaluate two other theories: Are bug fixes to bigger files more likely to be linked? and Does the prior experience of the file owner influence linking behaviour? We found no informal visual evidence supportive of these theories.
7.2 Practical Effects: BugCache Revisited
The above analysis shows that the extent of bias in the data is significant and that the effort of finding the ground truth (e.g., through manual annotation with LINKSTER) leads to important insights. But do these insights translate to practical impact? In this sub-section we investigate the impact of approaching ground truth in terms of changes in the accuracy of the award-winning BugCache algorithm. To that end, we repeated our experiment showing the impact of bias using Apache data. Specifically, we started from two different datasets: the first dataset (called A below) contained all 1576 bugs introduced in the Apache 2.0 branch; the second one included the additional 65 bugs found by Justin (called J). Table 4 shows the resulting accuracies for training and predicting on each combination of these two datasets.
Consider training on the extracted data A and predicting on the same data. This provides a baseline accuracy of 0.875. If the prediction is, however, performed on the dataset representing ground truth for the period of manual annotation, A ∪ J, then the accuracy falls to 0.870. We concede that, due to the limited manually annotated period, this difference (like all the differences in the table) is not significant. But, as the following shows, we can recognize a tendency. Alternatively, consider adding the manually annotated bugs to the training set (i.e., training on A ∪ J). For each possible prediction target (i.e., A, J, and A ∪ J) we find that the availability of the additional information actually leads to an improvement in prediction accuracy. This is especially impressive where the prediction target is A, as it shows that the manually annotated bugs contain information relevant to the automatically extracted ones, helping BugCache to find four additional bugs.
Table 4: BugCache Prediction Quality
<table>
<thead>
<tr>
<th>Learning Set</th>
<th>Test Set</th>
<th>Accuracy</th>
<th>95% Confidence Interval</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>A</td>
<td>0.875</td>
<td>0.858, 0.890</td>
</tr>
<tr>
<td>A</td>
<td>A ∪ J</td>
<td>0.870</td>
<td>0.852, 0.885</td>
</tr>
<tr>
<td>A</td>
<td>J</td>
<td>0.738</td>
<td>0.620, 0.830</td>
</tr>
<tr>
<td>A ∪ J</td>
<td>A</td>
<td>0.878</td>
<td>0.860, 0.893</td>
</tr>
<tr>
<td>A ∪ J</td>
<td>A ∪ J</td>
<td>0.874</td>
<td>0.857, 0.889</td>
</tr>
<tr>
<td>A ∪ J</td>
<td>J</td>
<td>0.785</td>
<td>0.670, 0.867</td>
</tr>
</tbody>
</table>
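The paper does not state how the confidence intervals in Table 4 were computed. Assuming a 95% Wilson score interval over the n = 1576 predictions of dataset A (both the interval type and n are our assumptions), the first row can be approximately reproduced:

```python
import math

def wilson_interval(accuracy, n, z=1.96):
    """Wilson score interval for a binomial proportion at confidence z."""
    denom = 1 + z * z / n
    centre = (accuracy + z * z / (2 * n)) / denom
    radius = (z / denom) * math.sqrt(accuracy * (1 - accuracy) / n
                                     + z * z / (4 * n * n))
    return centre - radius, centre + radius

low, high = wilson_interval(0.875, 1576)
print(round(low, 3), round(high, 3))  # 0.858 0.890, matching Table 4's first row
```

The much wider intervals for test set J follow the same logic: with only 65 bugs, the radius grows by roughly a factor of five.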
8. DISCUSSION AND CONCLUSIONS
In this paper, we analyzed three main research questions and tried to find “ground truth” in the commit annotations of a very popular software engineering dataset. We used temporal sampling to define an evaluation subset of the original Apache dataset and manually annotated all commits, with the assistance of an Apache core developer and the use of Linkster.
As presented in our previous work, bias in empirical software engineering datasets may affect results of applications which rely on such data. Unfortunately, based on our data verification, we found that things are even worse: our findings cast doubt on some of the core assumptions made in empirical research. Specifically:
1. Bugs often go incognito: they are not always reported in the bug tracker but may, e.g., only be discussed on mailing lists, and
2. commits do not always change the functionality of the program.
Specifically, we showed that not all fixed bugs are reported in the bug tracking database and that most of the commits (62.9%) are not related to a bug fix or feature request (which would introduce a program change) but instead relate to documentation (32%), voting (5.3%), or releases (8.9%). In addition, we presented the curious case of backport commits and the challenging impact-of-defect vs. cause-of-defect problem. Both issues have an impact on software engineering datasets. Consequently, even though automated linkage tools are able to connect a remarkable number of commits to bug reports, many bugs (sometimes the most critical ones) never show up in the bug tracker and are, therefore, not linked. This raises new issues concerning the validity of studies that rely on version control and bug report data only, beyond what we reported earlier. We presented a detailed examination of the bias in the automatically linked set, when compared to the manually linked set. Especially notable is the significant variation in linking behavior among developers, and the anecdotal evidence suggesting that bug-fixing experience and code ownership play a role in linking behaviour. We also showed that BugCache has a strong tendency to miss predictions if it is not trained on ground truth.
Another implication of the work presented here is that empirical software engineering studies will need to take the whole software development social eco-system (revision control system, bug tracking database, mailing lists, email discussions, discussion boards, chats, etc., as well as the corresponding data from other, related projects) into account in order to elicit a more complete picture of the underlying development process. This would allow capturing the nature of some of the bugs and commits that our informant tediously collected manually.
Nonetheless, this study is only a first step towards quality-approved datasets, and we acknowledge that we were only able to verify a small subset of the overall Apache dataset. Therefore, we hope to influence the community to seek more ground truth for more software engineering datasets. Granted, such work would entail significant manual labor, but, undoubtedly, the resulting valuable improvements in data fidelity will serve the community well in years to come. We seek mechanisms for fostering this community effort, and welcome suggestions from readers to this end.
Acknowledgment
Many thanks to Dr. Justin Erenkrantz for the time he spent in Zurich annotating commits and providing feedback to the Apache dataset and Linkster. This work was supported by Zurich Cantonal Bank (Bachmann), U.S. NSF SoD-TEAM 0613949 and an IBM Faculty Fellowship (Bird, Rahman, and Devanbu), and Swiss National Science Foundation number 200021-112330 (Bernstein).
Introduction and Overview
Software engineering is an expensive and time-consuming task. One strategy for reducing the effort of application building is the reuse of work products from other projects. The product family approach promises to maximize reuse in a systematic way. Product family development focuses on the creation and maintenance of a whole set (family) of software products. It has recently gained much interest in various application domains, including electronic commerce, information systems, medical systems and telecommunication systems. Product family development differentiates between the creation and maintenance of system assets (development artefacts) that are common to the various application systems and the assets that are specific to particular applications. In contrast, research in traditional software engineering disciplines has mainly focused on “one of a kind” systems.
The principal ideas and solutions developed for “one of a kind” systems in industry and research may still be applicable to product family development. But product family development requires an adjustment and extension of those concepts, especially in the areas of requirements engineering, software architecture and components. Software architectures and distributed components, for example, have to be built under the premise that evolution of the product family is inevitable, and they thus have to provide solutions for actually mapping variable parts onto interfaces and code. Similarly, requirements engineering must take the results of the domain analysis into account when defining application-specific requirements. User needs should, whenever possible, be mapped to requirements already satisfied by the core architecture to guarantee a successful reuse of other product family assets.
This Dagstuhl Seminar convened twenty-six leading practitioners and researchers from various disciplines to examine the effectiveness and efficiency of product family based software system development. The seminar was mainly organised by the EUREKA/ITEA Project ESAPS (Engineering Software Architectures, Processes and Platforms for System Families) in cooperation with the SEI (Software Engineering Institute, Carnegie Mellon University, PA, USA).
After overview talks on “the American view” of software product lines (by Linda Northrop, SEI, USA), “the European view” of software product lines (by Frank v. d. Linden, Philips, Netherlands), requirements engineering (by Klaus Pohl, University of Essen, Germany), architectures for product families (by Paul Clements, Carnegie Mellon University, USA), variability in product families (by David M. Weiss, Avaya Communication, USA), and scoping of product families (by Peter Knauber, Fraunhofer IESE, Germany), three main topics for the seminar were identified as a result of a brainstorming session and discussed in parallel working groups:
1. Product line adoption strategies and convincing business cases;
2. Managing variability in space and time for software intensive systems;
3. Economics and marketing issues of product lines.
Thanks are due to the Dagstuhl Directorate for accepting this international event, and to ITEA and the local funding organisations for supporting the travel of the ESAPS project participants. Without the enthusiastic cooperation of all participants this workshop would not have been the success we feel it has been. Special thanks go to the conveners of the working groups, and to our student Mathias Brandenburg for their support in organisation and session recording. Last, but definitely not least, our final thanks go to the cheerful people at Dagstuhl without whose support this event would be much more work and much less fun.
Essen, München, Pittsburgh, Eindhoven and Kaiserslautern, August 2001
*Klaus Pohl, Günter Böckle, Paul Clements, Henk Obbink, Dieter Rombach*
Agenda
Tuesday, April 17
13:45 – 15:00 Overview (Chair: Günter Böckle):
- Requirements Engineering (Klaus Pohl)
- Software Architecture (Paul Clements)
- Variability in Product Families (David Weiss)
- Scoping of Product Families (Peter Knauber)
15:30 – 16:30 Overview (Chair: Dieter Rombach)
- Product Families in ESAPS (Frank van der Linden)
- Product Families at the SEI (Linda Northrop)
16:50 – 18:00 Brainstorming Session
- Hot Topics for Parallel Working Groups
Wednesday, April 18
9:00 – 10:30 Focused participant presentation within each parallel working group:
- Product Line Adoption Strategies and Convincing Business Cases
- Managing Variability in Space and Time for Software Intensive Systems
- Software mass customisation
11:00 – 12:00 Parallel working groups (cont.)
13:00 – 16:00 Parallel working groups (cont.)
16:00 – … Excursion
Thursday, April 19
9:00 – 10:30 Summary presentations of the three parallel working groups
11:00 – 12:00 Parallel working groups (cont.)
13:30 – 18:00 Parallel working groups (cont.)
Friday, April 20
9:00 – 10:30 Summary presentations II of the three parallel workgroups
11:00 – 12:00 Closing Session
- Summary of Results
- Open Issues
- Publications
13:00 – 14:00 Closing Session (cont.)
14:00 End of the Seminar
Software Product Line Engineering Is Not That Difficult
Sergio Bandinelli
Parque Tecnol. de Zamudio, Zamudio, Spain
Technically speaking, Software Product Line Engineering is not that difficult. It is just common sense applied to software production. But we must have a serious communication problem, because we spent much of the time here finding ways to be more convincing.
I am sure that, once we have a good set of convincing business cases, Product Line Engineering will become the “natural” wide-spread way of constructing software – it’s just a question of time…
Quality Attribute Design Primitives
Len Bass
Carnegie Mellon University, USA
Many standard mechanisms exist for achieving quality attributes within a software system. For example, availability is typically achieved by having some form of redundancy, either of data or of function, some form of communication to keep the redundant data or functions synchronized, and some form of health monitoring to determine whether components are performing correctly. Other quality attributes have similar standard mechanisms.
We are engaged in cataloguing the mechanisms used to achieve the quality attributes of availability, performance, security, modifiability and usability. We have a draft list of approximately 50 such mechanisms.
Furthermore, quality attribute definitions are not operational. Availability, for example, is defined as the percentage of time that a system is available for use (with some caveats). This provides little guidance for requirements elicitation or for verifying that a requirement has been met. We are characterizing quality attribute requirements in terms of a collection of scenarios that have explicit definition of the stimulus that impacts the attribute and explicit possibilities for responses that can be translated into requirements. There are also approximately 50 such scenarios in our draft list.
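The scenario structure described above, an explicit stimulus paired with candidate responses that can be translated into requirements, can be sketched as a small data type. The field names are our own illustration, not the SEI's notation.

```python
from dataclasses import dataclass, field

@dataclass
class QualityAttributeScenario:
    """An operational quality-attribute requirement: an explicit
    stimulus plus the responses that can become requirements."""
    attribute: str                                  # e.g. "availability"
    stimulus: str                                   # what impacts the attribute
    responses: list = field(default_factory=list)   # candidate requirements

scenario = QualityAttributeScenario(
    attribute="availability",
    stimulus="primary server process crashes under normal load",
    responses=["failover to replica within 30 s", "no committed data lost"],
)
print(scenario.attribute, len(scenario.responses))  # availability 2
```

Making the stimulus and responses explicit is what turns a non-operational definition ("percentage of time available") into something that can be elicited and verified.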
Once we have these two lists, then we can provide arguments as to how the mechanisms enable the achievement of the particular responses when particular stimuli occur. These arguments are an aid to the system designer in choosing which mechanisms to use to achieve particular quality attributes.
A mechanism designed to achieve one quality attribute scenario has an impact on other quality attributes as well. That is, using the availability example, redundancy has a cost in terms of performance and modifiability and possibly security and usability. We call these side effects.
For each mechanism in our list, we are producing a description of how to analyse a system to see how well that mechanism achieves its quality attribute goals in the context of this system as well as to understand the side effects of each mechanism.
This is work in progress. The current status is that we have draft lists of attribute scenarios and mechanisms and analyses of several mechanisms in terms of both the attribute they are intended to achieve and their side effects.
System Families, a Business Driven Architectural Approach
Jesus Bermejo
Telvent Interactiva, Sevilla, Spain
Frequently, a system family framework has to provide solutions not for existing systems but for future systems and nowadays for rapidly evolving technological markets.
The technical solution has to be assessed in terms of its fitness for the strategic and business objectives. It has to facilitate the interaction between software and business processes.
The analysis of the variants both under a business and software perspective through specific views, the mapping between both and the detailed analysis of the software interfaces involved are identified as key activities in the global approach.
Challenges of Product Families
Günter Böckle
Siemens AG, Munich, Germany
This position paper establishes some theses and presents some questions about product family engineering (PFE). These theses and questions are considered by the author to be significant for further R&D in this field and worthwhile discussing at the workshop.
1. PFE will not reduce time to market... unless you are on the market with the first product of the family before your major competitors
- PFE where you first spend a lot of time doing domain engineering before your first product hits the market will make you lose the market
- The suggested process:
First, a thorough product (family) definition; second, planning of the reference architecture with provision for architecture evolution; third, development of the first product; and after that, development of core assets in parallel with further products
- Research has to be performed for
- product definition
- fast reference architecture definition with inherent variability
- mapping requirements variability → feature variability → (reference) architecture variability → component variability
- partial architecture reuse and architecture recovery (for the most common case - plowed field)
- Include marketing and business administration people into the common work for PFE!
2. Derivation is the least known part of PFE - we have to develop methods or even a theory for product derivation!
- This includes methods for configuration management in PFE and generative methods
3. Making good use of variability modelling means late binding - but how can we get rid of the unwanted side effects of late binding?
- Late binding means that we may have lots of product interaction and feature interaction which occur at installation time or run time but that can only be tested with lots of effort. We have to identify these product and feature interactions as part of PFE so that we do not introduce errors that cannot be identified and corrected later.
4. **The tools: how can we get easy-to-learn and easy-to-use tools for supporting PFE?**
- We have many tools for requirements engineering, software design, etc. But none are really satisfactory. There seems to be a trend towards the Rational tool suite. Will we get the same as we had before with operating systems or office software: MS Windows and MS Office as quasi-standards that nobody likes but everybody has to use because everybody else is using them?
- What can we do to avoid this from happening again?
- We need to establish the requirements for tools: what do we want them to do? Do not forget: the tools must be easy-to-learn! No company will allow their designers to spend weeks for learning to work with new tools while products have to be developed!
- How can we support e.g. traceability over objects in different tools?
- Provide e.g. tools with intelligent interfaces that specify the sets of objects, methods, etc. offered and required at the interfaces, together with the "usage" made of them.
- Support "openness" for tools so that companies can use different sets of tools which are still able to cooperate and even exchange models.
**A Short Manifest for Product Lines**
Paul Clements
SEI, Carnegie Mellon University, Pittsburgh, USA
1. Software architecture is key
2. Software product lines are an important emerging paradigm
**Reusing Efforts**
Marko Fabianke
GMD First, Berlin, Germany
Software engineering suffers much from the fact that developers are reinventing the wheel over and over again. Reusing previously created artefacts, and the effort that has been spent to create them, is one of the central problems software engineering has to deal with. Research has addressed this problem by means of new development paradigms like component-based engineering, product family development or design patterns (to name just a few). Although these paradigms are widely acknowledged within the research community, they do not simply make their way to the market. Economical and organizational constraints usually prevent the software development industry from investing in their own future by adopting new techniques and technologies. To convince them of the strategic superiority of a new idea, one needs self-confidence, persuasiveness and strong arguments, and this Dagstuhl seminar has helped a lot to equip the research community with them.
Evaluation Needs for Successful Software Product Line Engineering
Cristina Gacek
University of Newcastle upon Tyne, Newcastle-upon-Tyne, UK
Software product line engineering differs from “one-of-a-kind” software engineering by addressing a family of systems. Existing software engineering support does not necessarily fully support software product line engineering. The artefacts that are specific to software product line development are: reference requirements, domain models, decision models, reference architectures, and reference designs. The evaluation needs for each of these assets extend those present in “one-of-a-kind” software development.
As opposed to traditional software development, here contradicting requirements may exist. Hence, requirements evaluation approaches are needed that allow for the existence of controlled contradictions. Since reference requirements get represented in domain models, the same observation applies for their evaluation.
A decision model is an artefact that is exclusive to software product line development. Evaluation means are needed for evaluating the completeness and correctness of decision models and their relation to the other various assets.
When it comes to architectural quality requirements, reference architectures are very different from “one-of-a-kind”. Reference architectures are to support various instance architectures that in turn can have differing quality requirements imposed on them. The only current way of evaluating a reference architecture for these characteristics is to instantiate it and evaluate every single instance. An interesting area of research would be to define some means to support such evaluation without requiring an extensive instantiation effort. Given that architectures are high-level designs, these same comments hold for reference designs.
Reusable components are of extreme importance in product line environments, though they are not restricted to these environments and have been addressed within the software reuse community for a long time. Reusable components have been traditionally tested by running various simulations of the environments that they are expected to fit into. Having automated test suites is of great help here.
A Framework Depicting Variability
Martin Glinz
University of Zürich, Zürich, Switzerland
As an „informed outsider“ in the field of software product families, I profited very much from this workshop. My main contribution was to shape the results of a brainstorming session on the essence and the management of variability into a neat framework which I depict below.
Figure 1. A framework for understanding variability

---
1 Reusable components also exist in non-product line environments.

Tool Support for Product Line Instantiation
Peter Knauber
Fraunhofer IESE, Kaiserslautern, Germany

People involved in software product line (or domain) engineering keep complaining that they suffer from a lack of tools to support the creation and maintenance of generic product line assets and their instantiation for specific products. Experience from large product line projects like PRAISE, ESAPS and others tells us that support of product line engineering by such tools is very important.

On the other hand, experience from technology transfer shows that people in development organizations are really reluctant to change to new tools; instead, they prefer to keep using the tools they are used to. One of the reasons we hear for this attitude is that it is sometimes hard to transfer existing documentation from old tools to the new ones.

The introduction of product line development into a new organization forces the people working there into a different development paradigm (planning and developing for reuse, and developing with reuse). Given the product line tools we wanted, these people would be expected to learn how to use a new set of tools at the same time, on top of the new paradigm! Despite the benefits we expect from tools specifically supporting product line development, very likely we would encounter severe drawbacks as well.

Summarizing, we seem to end up with two contradicting requirements: product line engineers would like to have new, product line-specific tools, whereas people actually doing the development would like to stick with their existing ones. But there is an easy way to resolve this contradiction: existing tools can be extended or combined in order to match product line needs as closely as possible.
Two examples we have done at Fraunhofer IESE may illustrate this:
A combination of the ARIS Toolsuite from IDS Scheer for business process modeling with an Excel spreadsheet enables the generic modeling and instantiation of electronic shops from a respective product line of e-commerce applications. The Excel sheet serves as a simple but working decision model to represent the functionality of different shop products.
An extension of Rational Rose using RoseScript allows modeling generic class (and other) diagrams within Rose: optional, alternative, and product-specific classes with their interrelations can be shown or hidden in the model as needed.
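A decision model of the kind embodied by the Excel spreadsheet above can be approximated with a plain table that maps features to inclusion decisions per product. The shop features and asset names below are invented for illustration; they are not taken from the IESE tools.

```python
# A minimal decision model: each product line member answers the same
# set of feature questions. Features and products are illustrative,
# loosely following the e-shop example above.
DECISION_MODEL = {
    "basic_shop":   {"catalog": True, "payment": True, "reviews": False},
    "premium_shop": {"catalog": True, "payment": True, "reviews": True},
}

# The generic asset base: one reusable module per optional feature.
GENERIC_ASSETS = {
    "catalog": "CatalogModule",
    "payment": "PaymentModule",
    "reviews": "ReviewModule",
}

def instantiate(product):
    """Resolve a product's feature decisions against the generic assets."""
    decisions = DECISION_MODEL[product]
    return [asset for feature, asset in GENERIC_ASSETS.items()
            if decisions.get(feature)]

print(instantiate("basic_shop"))  # ['CatalogModule', 'PaymentModule']
```

The point of both IESE examples is exactly this separation: the decision model stays a simple, editable table, while instantiation is a mechanical lookup against generic assets.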
Conclusion:
Instead of complaining about the lack of product line-specific tools, product line scientists should develop workarounds based on existing tools and publish the results. This would, on the one hand, lower the barrier to product line technology for interested organizations and, on the other hand, help institutions doing technology transfer to provide them with better solutions.
Using Separation of Concerns to Simplify Software Product Line Engineering
Charles W. Krueger
BigLever Software, Austin, USA
Published proposals and solutions for building software product lines rely on some of the most complex, resource intensive, capital intensive, and intellectually demanding software engineering practices developed to date. Examples include domain engineering, reverse engineering, rearchitecting, redesign, reimplementing, complex interacting software processes, system generators, reuse libraries, component assembly validation, and so forth. Because these activities often represent a fundamental shift to heavyweight technologies and complex processes, the time-frames for establishing software product line practices are typically measured in many person-years and many millions of dollars. Furthermore, because of the complexity and extended timeframes, the risk of failure is high. For most software engineering organizations, the complexity, cost, and perceived risk are a prohibitive barrier for implementing formal software product line practices.
Contrast this to the fundamental notion of software product line development. Software product line development is, in essence, developing software for a single system along with extensions to account for different, typically small, variations for nearly identical systems in the family. This begs the question, then, as to why the solutions for building software product lines aren’t as simple as (1) build a single software system, and then (2) build the collection of small variations. Why do we need a major shift to complex and heavyweight software engineering technologies, methods, processes, and techniques?
The answer is that, over the past several decades, we have developed formal tools and techniques for building single software systems (item #1 above), but we have no formal tools or techniques in our arsenal for building and managing a collection of small variations for a software product line (item #2 above). To compensate for this void, software engineers historically have relied on informally contrived solutions such as IFDEFs, configuration files, assembly scripts, install scripts, parallel
configuration management branches, and so forth. However, these informal solutions are not scalable. More recently, software product line research has focused some of software engineering’s most powerful and complex solutions to managing product line variation.
We believe there is a simpler way to fill this void in our arsenal for creating and managing the collection of variations in a software product line. Using one of computer science’s most powerful principles, separation of concerns, we have built a commercial product, BigLever Software GEARS, that manages the collection of variations in a software product line in conjunction with the existing tools and techniques for building single software systems. That is, we make software product line engineering a straightforward extension to single system engineering. The separation of concerns is applied so that the technology is independent of language, operating system, configuration management system, build system, and so forth. Furthermore, it does not depend on a domain modeling language, architecture language, or design language. We have adopted the adage that the right point of view saves 20 points of IQ. By extending the existing single system tool set with an independent formal technology focused on product line variation, we believe software organizations can achieve the order of magnitude benefits of software product lines with an order of magnitude less time and effort than is currently discussed. Rather than talk in timeframes of months and years, we think in terms of what can be accomplished the first day, week, or month.
BigLever Software GEARS is a software mass customization environment that focuses solely on the concern of managing the variation that exists in a software product line. Within that concern we identified four basic tasks that are necessary and sufficient for building and maintaining a product line: characterize the abstract dimensions of variation in the product line, characterize where the individual product line members lie along those abstract dimensions, identify the locales of variation in the software realization, and characterize how the abstract dimensions of variation are instantiated at the locales of variation. Note that these four tasks are the basis of any software reuse technology. BigLever Software GEARS can be used for all three forms of software product line development: proactive (planning ahead for predicted variation), reactive (responding to unpredicted variation), and extractive (reverse engineering variation from legacy software).
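The four tasks listed above can be made concrete with a toy model: dimensions of variation, member positions along them, locales of variation, and an instantiation rule. All names below are our illustration; this is not the GEARS product or its API.

```python
# Toy model of the four variation-management tasks described above.

# 1. Abstract dimensions of variation in the product line.
DIMENSIONS = {"platform": ["linux", "windows"], "edition": ["lite", "pro"]}

# 2. Where each product line member lies along those dimensions.
MEMBERS = {
    "viewer": {"platform": "linux",   "edition": "lite"},
    "studio": {"platform": "windows", "edition": "pro"},
}

# 3. Locales of variation in the realization (file -> dimension it varies on).
LOCALES = {"io.py": "platform", "features.py": "edition"}

# 4. How a dimension's value is instantiated at each locale (here, by
#    selecting a variant of the file tagged with the chosen value).
def configure(member):
    choices = MEMBERS[member]
    return {locale: f"{locale}@{choices[dim]}" for locale, dim in LOCALES.items()}

print(configure("viewer"))  # {'io.py': 'io.py@linux', 'features.py': 'features.py@lite'}
```

The separation-of-concerns claim corresponds to the fact that nothing here depends on the language, build system, or contents of the varied files; only the mapping from choices to variants is managed.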
Evolution of Product Families
Julio Cesar Sampaio do Prado Leite
PUC-Rio de Janeiro
We understand that product families should be implemented as component-based software in order to facilitate the generation of different versions. As such, the reuse achieved by producers of software using the concept of a product family is very much dependent on the span, level of abstraction, and granularity of the components repository.
By span we mean the degree of differences (conceptual distance) between different possible products. The differences may be either functional or non-functional and may require different approaches to the integration of components or the development of new ones. Of course, the span will dictate the investment in the task of generalizing the requirements process. Usually, a wrong choice of investment in the definition of the family (domain definition) will lead to a misfit of the overall architecture.
The level of abstraction is related to how the architecture will be organized, that is, whether we will have abstract components or will integrate low-level components. The level of abstraction will also contribute to the overall flexibility of the architecture. Should we use the idea of domain languages as in Draco, or should we use the concepts of frameworks, or just rely on design patterns? Of course, this is also a question of investment and will be a function of the span of a given family.
The aspect of granularity is related to the degree of specialization we will have in our family. Well-established families will have a very fine granularity on the basic components, which will allow more diversity on the integration of these components. Of course the level of granularity will vary in accordance with the span and the level of abstraction.
In order to have product families that will endure as an investment, we will need methods, techniques and tools that can deal with span, level of abstraction and granularity in an evolving environment. That is, after making a large investment in a software family, the software producer would like this family to have a long life, that is, to be able to evolve.
We firmly believe that to guarantee evolution we need a solid policy on how to deal with change, how to manage different integration patterns and how to strike the right balance of span, level of abstraction and granularity. In the pursuit of solutions to these problems, we foresee that a "requirements baseline" is mandatory. We understand a "requirements baseline" as an up-to-date understanding of the family at the highest level of abstraction, which should provide traces to the sources of information and to the various levels of abstraction used to integrate the lower-level components.
Recent results from our study of the concept of a "requirements baseline" for application-oriented software make us believe that it can be transferred to the problem of product families. In our studies we have used scenarios as the basic representation of the "requirements baseline".
**Modelling System Families with Generic Architectures**
Jürgen Nehmer
University of Kaiserslautern, Kaiserslautern, Germany
A system family is a set of programs which obey certain rules describing commonalities and variabilities of a given system type, usually in a well-defined application domain. One promising modelling approach for system families is the definition of a generic system architecture based on the notion of generic parameters associated with system artefacts.
Generic parameters, as opposed to computational parameters, control the shape of programs, e.g. they are used to set or change structural, algorithmic or other essential characteristics of a program like timing behaviour, efficiency or storage requirements. A program with generic parameters associated to it is called a *generic program*. Each complete set of values assigned to all generic parameters of a generic program defines an *instance* of that program. All different instances of a generic program define a *program family*.
If a generic parameter is set at run time we call that program *adaptive*, i.e. it has the ability to change its structure, algorithms, timing behaviour etc. dynamically at run time. Unfortunately, computational and generic parameters passed at run time are not clearly separated from each other in present system designs.
Generic parameterization has been applied mostly at the code level. However, there is a potential for parameterization at different architectural levels which has not been investigated extensively so far. Below is a list of architectural levels where investigation of generic parameterization seems to pay off:
- code level
- interface level of components
- interaction level of components
- composition level
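As a rough illustration of these notions (not from the abstract; all names are invented), a generic program can be sketched as a function whose generic parameters are bound at instantiation time; each complete binding yields one *instance*, and all instances together form a *program family*:

```python
# Hypothetical sketch: the generic parameters (algorithm, reverse) control
# the shape of the program. Each complete assignment of values to them
# defines one instance; all instances form a program family.

def make_sorter(algorithm: str, reverse: bool):
    """Bind the generic parameters; the result is one family instance."""
    if algorithm == "merge":
        def sort(xs):
            # Python's built-in sort is a stable mergesort variant.
            return sorted(xs, reverse=reverse)
    elif algorithm == "selection":
        def sort(xs):
            xs, out = list(xs), []
            while xs:
                pick = max(xs) if reverse else min(xs)
                xs.remove(pick)
                out.append(pick)
            return out
    else:
        raise ValueError(f"unknown algorithm: {algorithm}")
    return sort

# Two instances of the same generic program -> members of one program family.
ascending = make_sorter("merge", reverse=False)
descending = make_sorter("selection", reverse=True)
print(ascending([3, 1, 2]))   # [1, 2, 3]
print(descending([3, 1, 2]))  # [3, 2, 1]
```

If the parameters were instead bound while the program runs, the program would be *adaptive* in the sense described above.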
**Software Product Line Practice Patterns**
Linda Northrop
SEI, Carnegie Mellon University, Pittsburgh, USA
A software product line is a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way [Clements 00]. Many organizations are finding that software product lines can yield remarkable quantitative improvements in productivity, time to market, product quality, and customer satisfaction. There are many other organizations attracted to the idea but uncertain what is involved and how to proceed.
To help address the needs of such organizations, the product line practice work of the Software Engineering Institute has for the past five years been focused on defining and documenting the essential activities and practice areas that are necessary for an organization to succeed with software product lines. The results of this work are continuously updated in a web-based document called A Framework for Software Product Line Practice [Clements 00]. In it we describe three essential activities, which are (1) the development of a set of core assets from which all the products will be built, (2) the development of products using those core assets, and (3) strong technical and organizational management that orchestrates the entire operation. Beneath the surface of these three activities are 29 practice areas that must be mastered. The practice areas are divided into categories of software engineering, technical management, and organizational management, according to what type of skills are required to carry them out. For example, defining the architecture is a software engineering practice area; configuration control is a technical management practice area; and training is an organizational management practice area.
Though laying out all of the essential activities and practice areas has proven very helpful to organizations, it is still necessary for an organization to figure out how to put the practice areas into play. One approach is to follow a divide and conquer strategy that permits an organization to divide the product line effort into chunks of work to be done. Given that the organization can characterize its situation and the product line work to be done, it still needs to determine which practice areas relate to each chunk and how to assign responsibility to effect the work. In Software Product Lines: Practices and Patterns [Clements 2001] we propose software product line practice patterns to assist organizations in this process.
Patterns are a way of expressing common contexts and problem-solution pairs [Alexander 79]. For software product line practice patterns, the context is the organizational situation. The problem is what part of a product line effort needs to be accomplished. The solution is the grouping of practice areas and the relations among those practice areas (and/or groups if there is more than one) that together address the problem for that context.
We present each software product line practice pattern using the following template:
- **Name**: A unique and intuitive pattern name and a short summary of the pattern.
- **Example**: One or more scenarios to help illustrate the context and the problem.
- **Context**: The organizational situations in which the pattern may apply.
- **Problem**: What part of a product line effort needs to be accomplished.
- **Solution**: The basis for the practice area pattern grouping underlying the pattern.
- **Static**: The grouping that lists the practice areas in each group.
- **Dynamics**: A table, diagram(s), or possibly scenario(s) describing the relations among the practice areas in each group and/or among the groups if there is more than one.
- **Application**: Any suggested guidelines for applying the pattern.
- **Variants**: A brief description of known variants or specializations of the pattern.
- **Consequences**: The benefits the pattern provides and also any known limitations.
The relations shown in the Dynamics section of each pattern are inherently iterative. The actual relations will vary depending upon the pattern. A relation between two practice areas might be “can be usefully practiced at about the same time as” or “produces artifacts or knowledge used by.” Often we will depict a relation as an arrow from one practice area to another, but the arrow will never mean a strictly linear completion sequence. So, practice area A → practice area B will never mean “do A and then when A is complete do B,” because in reality you will work on A and then B, and then B and then A, and so on. Interpret the arrows as denoting a shifting of active emphasis, but by no means exclusion.
---
**Product Lines – A Rich Source of Research Topics**
Henk Obbink
Philips Research Laboratories, Eindhoven, Netherlands
The production and evolution of product lines requires the integrated application of leading edge knowledge in at least the following areas:
- business and management
- (software) architecture
- (software) process
- (software) organization
Architecture is considered to be the central concept that is influenced by and influences the other factors. This mutual dependence is depicted in the figure below.
[Figure: Architecture (A) at the center, mutually dependent on the Business (B), Process (P) and Organization (O) forces.]
Product line architectural styles are "shaped" by these B, P and O forces, in addition to capturing the essential commonality and variability.
**Product Line Development Requires Close Co-Operation of Traditional Software Engineering Disciplines**
Klaus Pohl
University of Essen, Essen, Germany
There are several factors that force software engineering to move from single product development towards product lines. Examples for such factors are mass customisation, evolution of technology and market economics.
The difference between single product development and product line development is the presence of variability in time and space. In principle, during domain engineering, variability is defined and designed into the product family whereas during application engineering the variability is exploited, i.e. variation points are bound to predefined variants to define a customer specific application which matches with the customer requirements.
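The binding of variation points described above can be sketched in a few lines (a minimal illustration, not from the position paper; the variation points and variants are invented):

```python
# Hypothetical sketch: during domain engineering, variation points and their
# allowed variants are designed into the family; during application
# engineering, each variation point is bound to one variant to derive a
# customer-specific product.

VARIATION_POINTS = {              # defined during domain engineering
    "ui_language": {"en", "de", "nl"},
    "persistence": {"file", "database"},
}

def derive_product(bindings: dict) -> dict:
    """Application engineering: bind every variation point to a variant."""
    for point, allowed in VARIATION_POINTS.items():
        if bindings.get(point) not in allowed:
            raise ValueError(
                f"variation point {point!r} must be bound to one of {sorted(allowed)}")
    return dict(bindings)

# One customer-specific application, matching that customer's requirements.
product = derive_product({"ui_language": "de", "persistence": "database"})
```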
Although difficult to manage, variability is needed in order to support a product line. Which variability to include in a given product line is determined during product line scoping and reflects a trade-off between opposing arguments: the business viewpoint craves as much variability as possible, whereas the engineering viewpoint wants as little variability as possible. Increasing the variability in a product line provides better market coverage, enables customisation, improves customer satisfaction, eases the marketing task, allows for product differentiation, and supports adaptability. On the other hand, variability increases complexity, increases up-front development costs, may imply performance penalties, adversely affects maintainability and testability for individual products, accelerates code decay, and requires considerable effort for validating the outcome of domain engineering.
Defining, exploiting and managing variability requires a smooth interplay of various “disciplines” and, of course, adaptation of existing methods and tools.
This seminar has brought together people from the different software engineering disciplines and thereby enabled intensive cross-discipline discussions. The results of those discussions are new views on product lines and variability and their aspects, mainly resulting from the synergy we could gain by combining the various achievements of the disciplines and by exchanging practical experiences.
**Traceability Support for Product and Service Families**
Bala Ramesh
Georgia State University, Atlanta, USA
We examine the role of knowledge management in the design, customization, and delivery of electronically delivered services, specifically Internet-delivered e-services. We suggest that concepts underlying mass customisation can be applied to deliver individualized services over the Internet, while allowing the service provider to operate at mass production levels and simultaneously catering to the service needs of individual customers. We also propose how such knowledge management can be supported by traceability-based information technology systems.
Software reuse has been a dream in software development for a long time. Despite many theories and artefact repositories it has not been effective on a large scale. Most success stories come from very well understood and specifiable domains (e.g., statistical routines or other mathematical procedures). One of the root causes for failure is the fact that software reuse has long been viewed as a product rather than a process issue. Consequences of this product view were large numbers of repositories containing potentially reusable artefacts (mostly code). However, in a concrete project situation there was not enough information associated with those artefacts in order to make a ROI decision, i.e., answer the question whether it is better to reuse some artefact than develop it from scratch yourself. The normal human reaction to such a situation is to not accept the risk associated with reuse.

The process view suggests that reuse candidates will only be chosen based on sufficient knowledge about reuse requirements in future projects (i.e. it requires a certain look-ahead ability) and based on information about the process of artefact adaptation. Sufficient knowledge about future projects enables the identification of requirements for any reuse candidate, which in turn can be used to characterize artefacts in a repository in a goal oriented, reuse supportive way. Under the assumption that most reuse candidates are not reused as-is, but have to be adapted, the process of adaptation (e.g., parameterised, based on generation, by-hand) defines the format of artefacts in a repository as well as the cost needed for adaptation.

If such a process oriented reuse approach is applied, reuse can become a reality. One of the best examples is published by the NASA SEL and others. It demonstrates that such an approach can result in tremendous reuse gains. In this concrete example reuse levels rose from traditional 30% to about 90%.
Software Product Family Development takes this process centered reuse approach to the next level by pro-actively planning reuse by initially anticipating commonalities among system variants to come. The idea of ‘producing’ all anticipated variants by distinguishing between their commonalities and differences is very popular in production technology. Today, the typical production line of a car manufacturer enables more than 10,000 variants of cars through the same assembly line. How far can we push this analogy in the human-based software development environment? It is clear that success of this approach depends on careful selection of domains that are small enough to have sufficient commonalities across all system variants and can be understood and modeled with reasonable up-front effort, but are also large enough to produce enough variants in order to capitalize the up-front investment. The existing approaches for product family development all distinguish between two kinds of processes: (a) the process of developing the domain specific artifacts at all levels from requirements to code, and (b) the process of instantiating these domain specific artifacts according to the specific requirements of a concrete system variant. Significant differences among existing approaches for product family development include (a) the support for identifying appropriate domains, and (b) the ordering of domain-specific and variant instantiation processes. Fraunhofer IESE has developed the incremental PuLSE approach which assumes that you can identify commonalities based on a limited number of existing or concretely planned systems instead of the entire domain. That means first systems can already benefit from reuse after a short initiation phase – even if it is not optimal. Optimization of reuse requires, therefore, incremental improvement of the domain knowledge as new systems are being developed. 
This partial ordering of domain specific and instantiation processes produces the optimal reuse potential after several evolutionary improvement cycles only, but makes product line development a realistic approach for many more domains and (especially small and mid-sized) companies.
**Defining Product Line Development Processes for Multiple Stakeholders**
Mike Stark
NASA Goddard Space Flight Center, Greenbelt, USA
This abstract describes research performed at NASA’s Goddard Space Flight Center (GSFC) into techniques that will provide specific guidance for defining the processes and products associated with a product line. In methods such as Synthesis, FAST, and PuLSE, we found that many of the process steps are identified, but that the details are left for a particular development team to fill in. This is appropriate, as these steps should depend on the particulars of an individual product line. However, it would be useful to have a systematic approach to completing these details, based on the characteristics of the product line.
This work is also motivated by a previous experience in developing a product line (although we didn’t use that terminology at the time) for the guidance, navigation, and control domain. This project, called Generalized Support Software (GSS), was intended to support flight dynamics ground systems for a series of future satellites. GSS exceeded expectations in all the goals that were set for the project. Unfortunately, the goals were focused strictly on cutting costs and cycle time. GSS was designed to run under a user interface that became obsolete during development, and was specified using an object-oriented notation. The specification notation was ideal for software developers, as there was a direct, easy and standard way of implementing from these specifications. However, the mathematicians who were responsible for specifying mission requirements for a given satellite found the notation unwieldy and confusing.
In examining current product line approaches, one can get lost in a diverse set of process steps and intermediate products. However, almost by definition, all product line approaches share two common elements. The first is that product lines involve both producing reusable assets (domain engineering) and consuming them to build applications (application engineering). The GSS mathematicians’ experiences motivate the importance of this producer-consumer dialog. The second common element is that using the product line is expected to save time and money by simplifying and streamlining the application engineering process, usually at the expense of added investment in domain engineering.
Given these considerations, I proposed the following approach to creating product line processes:
- Identify the key stakeholders in the application engineering process.
- Characterize the stakeholder perspectives by identifying their responsibilities within a project and the problem-solving approaches they use to carry out these responsibilities.
- Define domain engineering product templates and associated reading techniques that are consistent with how these application engineers do their work. These reading techniques are integrated into the application engineering process.
- Define domain engineering processes for creating these previously defined products. These processes must satisfy the perspectives associated with different domain engineering roles.
This approach is rooted in University of Maryland research into reading techniques, particularly Dr. Forrest Shull’s dissertation research. The hypothesis that led me to do this is that the application engineering process requires the reading of reusable assets. In working an example based on GSS to present at Dagstuhl, I saw that this was a good idea but not sufficient. The proposed approach does indeed address the consumer’s interest, but would not have identified object technology as the mechanism for achieving the high reuse that would enable the desired return on investment. I am currently revising the approach to balance stakeholder goals on the production and the consumption side. The goal elicitation techniques used to generate benefit functions in PuLSE are one promising starting point for accomplishing this.
**Modeling Languages For Product Families: Method Engineering Approach**
Juha-Pekka Tolvanen
MetaCase Consulting, Jyväskylä, Finland
Current modelling languages are based on the concepts of programming languages, leading to a poor mapping to product family characteristics and difficulties in leveraging the benefits and efficiencies of product family development. Method engineering provides one solution. It suggests developing modelling languages that map to a specific domain: here to a family and its various characteristics. A modelling language (i.e. a metamodel) is defined based on the family characteristics. The metamodel sets the variation space for possible models of variants and provides the basis for generators.
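The method-engineering idea can be sketched in miniature (a hypothetical illustration, not from the abstract; metamodel, concepts and attributes are invented): a metamodel fixes the variation space for models, and conforming models drive a generator.

```python
# Hypothetical sketch: the metamodel defines which concepts and attributes a
# family-specific modelling language may use; models are checked against it
# and then fed to a trivial generator.

METAMODEL = {"Device": {"name", "protocol"}}   # allowed concept -> attributes

def conforms(model: dict) -> bool:
    """A model conforms if its concept and attributes are in the metamodel."""
    concept = model["concept"]
    return concept in METAMODEL and set(model["attrs"]) <= METAMODEL[concept]

def generate(model: dict) -> str:
    """Generator: emit a configuration stub from a conforming model."""
    if not conforms(model):
        raise ValueError("model does not conform to the metamodel")
    attrs = " ".join(f"{k}={v}" for k, v in model["attrs"].items())
    return f"[{model['concept']}] {attrs}"

stub = generate({"concept": "Device",
                 "attrs": {"name": "scanner", "protocol": "dicom"}})
```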
**Development Issues in a Family of Medical Imaging Systems**
Frank van der Linden
Philips Medical Systems BV, Best, Netherlands
The introduction of product families (also called "product lines") within software development is the only way to survive in the market. Only if variability is introduced in a systematic way can the effort needed for the development of new products be reduced.
The introduction of product families is non-trivial and a lot of issues still have to be sorted out. It seems to be the case that the technology to support variability is ready to be used, but the way to use it is not yet clear. Thus the question remains how to organize ourselves, and how to move the organization towards a state where a product family approach is used.
The position paper submitted for the Seminar with the above title dealt with a technical issue: separating data from interfaces, in order to keep the interfaces stable and to ease evolution within the family. To make this work, architects from the involved parts of the organization should discuss this issue, in order to get enough commitment. New groups that enter the product family at a later stage have to send a representative to the weekly data model discussion group. This group has now been operating for 3 years.
**Scenario-Based Product Family Development**
Josef Weingärtner
Siemens AG Medical Engineering, Erlangen, Germany
This position paper establishes some theses, derived from validation aspects within the medical domain, about product family engineering (PFE). There are several very interesting aspects of applying the PFE assets in an environment where conventional product engineering was done according to the V-model of software development. Perhaps not all aspects can be covered in depth during the workshop, but short discussions on each would be much appreciated by the author.
PFE is a promising approach for software development by satisfying the following requirements:
• short time to market by significant reuse – but do a short and efficient domain engineering
• better quality by reusing previously tested components – but only if change management aspects are strongly considered and modeled
• better change management and product family evolution by applying an architecture encompassing variability and planning for evolution
For many product families it will not be feasible to deliver different SW products as such: one single CD, encompassing all products of the family, will be delivered, representing each product of the family, and at compilation time, installation time or run time the actual product will be "configured".
• Thus, methods for configuration management in PFE have to be developed.
• New methods for derivation are the generative methods. To what extent are generative methods feasible? In what kinds of applications are they feasible?
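The single-deliverable idea above can be sketched as follows (a hypothetical illustration, not from the position paper; the feature names are invented): the deliverable contains every feature of the family, and a configuration at install or run time selects which ones are active.

```python
# Hypothetical sketch: one deliverable contains all features of the family;
# at installation or run time a configuration selects the active subset,
# thereby "configuring" the actual product.

ALL_FEATURES = {
    "basic_imaging": lambda: "imaging ready",
    "3d_reconstruction": lambda: "3d ready",
    "remote_diagnosis": lambda: "remote ready",
}

def configure_product(enabled: set) -> dict:
    """Select one family member's features from the full deliverable."""
    unknown = enabled - set(ALL_FEATURES)
    if unknown:
        raise ValueError(f"unknown features: {sorted(unknown)}")
    return {name: fn for name, fn in ALL_FEATURES.items() if name in enabled}

entry_level = configure_product({"basic_imaging"})
premium = configure_product({"basic_imaging", "3d_reconstruction"})
```

Configuration management then has to track, for every delivered installation, which binding of features it runs with.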
Remark: is there any university where topics like configuration management or testing are thoroughly covered?
If you come to product family engineering you have the following additional aspects to consider:
• Design for reuse
• Changes are more frequent and more complex
• Considering the existing architecture and the components
The derived products have to be tested against the derived domain requirements and use cases / scenarios. Test cases are derived from the requirements, but also, with particular focus, from use cases and scenarios. An idea to accompany the product derivation:
• Define high level scenarios describing working with (products of) the product family
• Refine them through the levels of product development
• Try to consider and model testing aspects
Problems:
• What kind of information is necessary on which level of domain engineering? What will be the impact if you miss it?
• How do you consider Change Management?
• How do you consider Traceability?
All these ideas should be worked out more precisely. Risks should be identified, ideas we might find in discussions that fit the goal should be elaborated more exactly. Also change management and traceability aspects should be discussed.
**Business Cases For Product Line Engineering**
David M. Weiss
Avaya Communication, Murray Hill, USA
We understand the technology for creating and using product lines reasonably well. However, we have few examples of well-constructed business cases for product-line engineering. We should be able to identify at least five convincing arguments, based on economic analyses, for the use of product-line engineering. A convincing business case should have the property that it shows how the use of product-line engineering leads to fulfillment of the goals of those we ask to invest in the technology. For example, a product manager has different goals for a product line than does a software development manager, a software engineer, or a software quality assurance manager. For example, a product manager may be interested in gaining market dominance for his/her product line, whereas a software development manager may be interested in reducing the cost of producing the next version of his/her product.
We should be able to show the product manager a business case that demonstrates that applying software product-line engineering to the set of products that he/she is responsible for will directly help to achieve market dominance. The business case should contain a quantitative demonstration of the relationship between market dominance and product-line engineering, e.g., a graph such as shown in the figure, which has been constructed solely as an example, without the benefit of real data.

It is relatively easy to find qualitative arguments for why product-lines help to satisfy goals such as market dominance, but it is still an open issue how to create convincing quantitative analyses. At the moment we can only hypothesize the form of the quantitative relationship; we need to perform experiments and gather data to confirm or reject our hypotheses.
THE DASH PROJECT: AN OVERVIEW
David P. Anderson
Domenico Ferrari
Computer Science Division
Department of Electrical Engineering and Computer Science
University of California, Berkeley
Berkeley, California 94720
February 29, 1988
ABSTRACT
The DASH project at UC Berkeley is studying problems arising in the design of large, high-performance distributed systems, and is building an experimental system. The system's major design goals are centered in three areas: 1) IPC performance, 2) global architecture, and 3) local architecture. In each of these areas, vertically integrated mechanisms are used to achieve design goals, while an open system structure is maintained where possible. This report describes the motivation and principles of the DASH project, and sketches the current design of the DASH system.
1. INTRODUCTION
The name DASH\(^1\) refers to:
(1) A research project studying the design principles of future distributed systems.
(2) A new distributed system architecture embodying the results of this research.
(3) An operating system kernel implementing the distributed system architecture.
This report is an overview of all three of these aspects of DASH. We describe first the motivation and goals of the project, then the distributed architecture and the kernel. An earlier report, *Issues in the Design of Very Large Distributed Systems* [4], expands on the motivation and principles of DASH. The DASH system design is described in more detail in three companion reports: *The DASH Communication Architecture* [33], *The DASH Virtual Memory System* [35], and *The DASH Local System Architecture* [34].
1.1. Motivation, Assumptions and Goals
Much current research in operating systems is focused on *high-level mechanisms* such as distributed transactions, support for replicated data, object-oriented programming systems, user interfaces, and facilities for parallel distributed computation. *Low-level mechanisms* such as virtual memory, process control, kernel structure, naming, local IPC, and network communication have not kept pace with the progress in high-level mechanisms. Many of the current research projects are based on outdated operating systems (such as UNIX\(^2\)) and are crippled by the inappropriate low-level mechanisms provided by these systems.
The main research objective of DASH is the development of optimal low-level mechanisms for the next generation of distributed computer systems. We have taken the following steps towards this goal:
1) **Extrapolate trends in computer and communication technology into the middle-to-distant future (5-20 years).**
The dominant host type will be the workstation; many will be shared-memory multiprocessors with a small number (10-100) of processors [13, 15]. Communication networks based on fiber optics will allow low-delay (30 to 50 milliseconds coast-to-coast) and high-bandwidth (0.01 to 1 Gigabit/second) communication between most pairs of hosts in the U.S., and eventually in the world [17, 26, 31].
2) **Propose functions and facilities of future computer systems based on this technology.**
We use the term *very large distributed system* (VLDS) [4] to refer to a hypothetical system running on the hardware base described earlier. A VLDS will link thousands or millions of hosts under diverse ownership. Its main function will be to provide secure, well-integrated access to logical *services*. These services might provide access to public databases (such as encyclopedias and archives), news media, sales, advertising, banking, interpersonal communication (mail, telephone, facsimile, and video conferencing), and entertainment (including distribution of audio and video).
---
\(^1\) DASH was originally an acronym for Distribution, Autonomy, Security and Heterogeneity, attributes we viewed as desirable in an operating system. This list kept growing, and rather than lengthen the name we kept it and removed its acronym status.
\(^2\) UNIX is a trademark of Bell Laboratories.
The processing power of VLDS hosts (and perhaps of specialized compute servers) is another type of remotely-accessible resource. A VLDS must support load balancing and large-scale parallel computation; in the latter case, thousands or millions of processors might be involved in a single computation.
3) **Identify the basic system-level requirements of these functions and facilities.**
These requirements fall into three main groups: 1) IPC performance, 2) global system architecture, and 3) local system architecture. The groups are discussed in Sections 3, 4 and 5 respectively.
4) **Propose designs and mechanisms for satisfying these requirements.**
The DASH project is developing a design for a VLDS. Our current design is sketched in the remainder of this report, and is described in more detail in the companion reports ([33-35]).
5) **Study these mechanisms by implementing them and evaluating the resulting system.**
The DASH project is currently building an operating system kernel (the DASH kernel) that implements our distributed system design and will be used to evaluate and refine it. The kernel is being implemented on Sun 3 workstations, and will soon be ported to a Sequent Symmetry shared-memory multiprocessor.
In summary, the DASH project is building a foundation for very large distributed systems. Because of the synergy that may arise from combining communication, service access, and processing in a single unified system, VLDS design is an important direction in computer systems research. A VLDS will subsume the proposed functions of Integrated Services Digital Networks (ISDN) [14]. The use of a VLDS for high-performance computing will augment (and often replace) the use of specialized hardware [12], general-purpose parallel hardware [16], and supercomputers to address the processing requirements of graphics, artificial intelligence, simulation and scientific applications.
2. **DASH DESIGN PRINCIPLES**
The design of computer systems, and especially distributed systems, is often described in terms of *modules*, i.e., logical components that interact only through abstract interfaces. Some of the modules are part of the protocol hierarchy; others are added by the local system architecture.
Likewise, the desired properties of the system are described by a set of *design goals*. In DASH, these properties fall mainly into the three areas mentioned earlier: IPC performance, global system architecture, and local system architecture. Each of these areas involves several of the modules, so we may represent the system design as a two-dimensional system of interactions, with modules on one axis and design goals on another (see Figure 1). We identify the following design principles arising from these multi-layer interactions:
*Vertical Integration of Mechanisms:*
Mechanisms must often span multiple system modules to achieve VLDS design goals. For example, high-performance IPC may be possible only by integrating mechanisms at the levels of virtual memory, process scheduling, and network communication.

**Figure 1: Interactions Between Design Levels.** (A matrix relating the system goal areas — global architecture, local architecture, and IPC performance — to the system modules: local IPC, virtual memory, process scheduling, service access, naming, transport protocols, subtransport layer, and network protocols.)
**Open Architecture:**
A VLDS will encompass hosts with different hardware architectures, different application areas, and different computing paradigms. Therefore, the portions of the system that are *standardized* (i.e., those portions that a host must implement in order to take part in resource sharing) should be minimal. Basically, the standardized part of the system must provide naming, a way to move bytes securely and efficiently, and not much beyond that.
These two principles are not followed in some approaches to building distributed systems. For pragmatic reasons, most current systems are built on top of existing general-purpose protocol hierarchies (the V system [11] is a notable exception). Such hierarchies usually export a simple interface that precludes vertical integration. In addition, both principles dictate against building a VLDS as an extension of an existing centralized system (the approach of Mach [22]), or as an "interconnection" layer on top of existing
operating systems (as is done in Cronus [23]).
3. HIGH-PERFORMANCE REMOTE INTERPROCESS COMMUNICATION
A VLDS will provide a mechanism for clients to communicate with services. Some services (such as those based on digital audio and video) will require high-performance remote interprocess communication (IPC). The general-purpose communication mechanism of a VLDS must support such communication. This mechanism consists of several components:
1. The movement of data across networks.
2. Protocol flow control and reliability mechanisms.
3. The movement of data between device interfaces (including network interfaces) and main memory.
4. Scheduling, synchronization and processing time of the communicating processes, and of intermediate protocols.
5. The movement of data between virtual address spaces on a single host. Services may run in separate address spaces. Access to a local service requires data movement between two user spaces, and access to a remote service requires data movement between user and kernel spaces at both ends.
To maximize performance, the remote IPC mechanism of a VLDS must address each of the above components, and their interactions. In particular, the following goals can be identified:
- To reduce the overhead of host processing, currently the bottleneck in most IPC systems. In particular, to minimize software copying, and to avoid software encryption and checksumming where possible.
- To provide real-time performance guarantees. Scheduling of communication resources (transmission queues in hosts and switches, and processing in clients and protocols) must be done on the basis of real-time deadlines.
- To support configurable stream protocols. Request/reply communication is inadequate for many high-performance applications over long distances. In addition, the reliability, flow control, network capacity, and integrity functions of stream-oriented IPC must be separated, allowing clients to use only the functions they need.
- To provide a high-performance security mechanism that 1) allows the user to specify the level of security desired, and 2) allows the presence of secure hosts and secure local networks to be exploited.
- To accommodate a variety of network architectures.
3.1. The DASH IPC Design
3.1.1. Real-Time Message Streams
In existing distributed systems, the network-dependent communication interface typically provides a simple abstraction such as unreliable, insecure datagrams. Higher software layers use this facility to provide higher-level abstractions such as reliable request/reply message-passing [10], reliable secure typed message streams [22], or reliable byte streams [20]. This approach simplifies the task of porting the system to different network types. However, the simple nature of the basic abstraction (such as datagrams) does not allow communication clients to express their performance, reliability and security needs, or their workload parameters, to the communication provider. This makes it impossible for the provider to use the most efficient mechanisms or to provide real-time performance guarantees. It also makes congestion control in large networks difficult.

In an attempt to solve these problems, the DASH network communication system is based on an abstraction called real-time message streams (RMS) [5]. An RMS is a simplex communication channel between a sender and a receiver. Message boundaries are preserved and messages are delivered in sequence. In addition, an RMS has various parameters reflecting its performance and security properties. Specifically, it has the following Boolean parameters:

- **Authentication:** if true, then impersonation (delivery of a message with an incorrect source label) is impossible.
- **Privacy:** if true, then eavesdropping (access to a message by a host or process other than that specified by the target label) is impossible.

An RMS has the following performance parameters:

- **Capacity:** an upper bound (enforced by the sender) on the amount of data outstanding within the RMS at any point (i.e., sent but not yet delivered).
- **Maximum message size:** an upper bound (enforced by the sender) on the size of individual messages.
- **Delay bound:** message delay is the elapsed real time between the start of the send operation and the moment of delivery. An RMS has an upper bound (guaranteed by the RMS provider) on message delay. The components of the delay may include network transmission delay, queueing and processing delays at the sender and at intermediate switches, and processing at the receiver. This bound may be deterministic, statistical, or best-effort.
- **Average bit error rate:** this parameter reflects the combination of 1) the error rate of the underlying transmission medium, and 2) the effectiveness of the checksumming algorithm. It is guaranteed by the RMS provider.
- **Average loss rate:** this reflects the expected rate of packet discarding from buffer overrun and checksum failures.

An RMS creation request includes desired and acceptable parameter sets. The actual parameters of the resulting RMS are returned to the client. These parameters must be compatible with the request's acceptable parameters; the request is rejected if this is not possible. The RMS provider tries to match the desired parameters as closely as possible.
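The desired/acceptable negotiation described above can be sketched in a few lines. The following Python toy model is illustrative only (the class and field names are our own, and it covers just two of the five performance parameters), not DASH code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RMSParams:
    # Boolean security parameters
    authentication: bool
    privacy: bool
    # Two of the performance parameters, for brevity
    capacity_bytes: int       # upper bound on outstanding data
    delay_bound_ms: float     # upper bound on message delay

def negotiate_rms(desired: RMSParams, acceptable: RMSParams,
                  provider: RMSParams) -> Optional[RMSParams]:
    """Return the parameters of the created RMS, or None (request rejected).

    The provider must at least meet the acceptable set; it then tries to
    match the desired set as closely as possible.
    """
    # Security flags the client insists on must be offered by the provider.
    if acceptable.authentication and not provider.authentication:
        return None
    if acceptable.privacy and not provider.privacy:
        return None
    # Provider capacity must reach the acceptable floor, and its delay
    # bound must not exceed the acceptable ceiling.
    if provider.capacity_bytes < acceptable.capacity_bytes:
        return None
    if provider.delay_bound_ms > acceptable.delay_bound_ms:
        return None
    # Grant the desired values where possible, otherwise whatever the
    # provider can actually guarantee.
    return RMSParams(
        authentication=desired.authentication and provider.authentication,
        privacy=desired.privacy and provider.privacy,
        capacity_bytes=min(desired.capacity_bytes, provider.capacity_bytes),
        delay_bound_ms=max(desired.delay_bound_ms, provider.delay_bound_ms),
    )
```

Note that the actual parameters returned may be weaker than desired (lower capacity, higher delay) but never weaker than acceptable.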
3.1.2. The DASH Network Communication Structure

In DASH, the RMS abstraction appears in the interface to the network-dependent part, and at higher levels of the system as well. RMS is the basis for a request/reply communication facility in which the RMS features serve to optimize request/reply performance. The structure of the DASH network communication system is shown in Figure 2.

A DASH system can encompass many networks. Each network has a set of protocols for implementing RMS between any two of its nodes. Note that, in this discussion, the term network refers to an abstract entity, not necessarily to a physical network. For example, the DARPA Internet (with the addition of RMS support) and a local Ethernet could be separate DASH networks, although they might share the same host interfaces and network media. The DASH network layer encapsulates everything below the network RMS abstraction: network-specific protocols for establishing RMS's, routing protocols, address resolution protocols, and so on.
The subtransport (ST) layer provides authentication and privacy, caching and multiplexing of network-level RMS's, piggybacking of messages, and other services [33]. The ST protocol must be implemented by all DASH hosts.
The ST module on a host maintains a set of secure channels to other hosts. These secure channels may be implemented in different ways, depending on the security properties of the intervening network [7]. In general, they use data encryption or cryptographic checksums. On Ethernet-like networks, a more efficient scheme that avoids cryptographic checksumming of data can be used. If the network is assumed by both hosts to be physically secure and free of eavesdroppers, no encryption is used. For each secure channel, ST maintains lists of owners authenticated to and from the remote host (see Figure 3). This authentication uses public key encryption (PKE)-based certificates. It is done only the first time a user communicates with a particular remote host, thus reducing encryption overhead.
The transport layer consists of a set of protocols that use the ST facilities. One of these, the Remote Kernel Operation Mechanism (RKOM), is used for all request/reply communication, and is mandatory for all DASH hosts. The other transport protocols are stream-oriented. They are implemented as separate processes that can be dynamically configured, in the style of Ritchie Streams [21]. These protocols provide functions such as reliability, RMS capacity enforcement, and flow control.
DASH allows the RMS abstraction to span processes. A subuser RMS spans protocol processes. Its delay bound includes protocol processing time. A user-level RMS spans user processes. Its delay bound includes end-process CPU time as well. In both cases, the enforcement of the delay bound uses deadline-based scheduling of protocol or user processes.
3.1.3. RMS Examples
To see the importance of RMS parameters, suppose that a client (say a transport protocol serving a user program) requires data privacy. The protocol requests an RMS from the subtransport layer. The desired and acceptable parameter sets both have the privacy flag set. Depending on the network, the following cases are possible:
(1) Privacy is provided by data encryption in the subtransport layer.
(2) The network has link-level encryption hardware; the subtransport layer learns this (it is a property of network-level RMS's) and does no data encryption.
(3) The network is considered secure, so no data encryption is done.
In any case, the RMS parameters allow the subtransport layer to use the optimal mechanism for privacy. If a client does not require privacy, no mechanism is used (which is again optimal). Without the RMS security parameters, this optimization would not be possible. The situation is similar for data integrity. Based on the values of RMS parameters, the optimal checksumming mechanism can be determined.
The following examples illustrate the uses of the RMS capacity and performance parameters:
- Initial request and reply messages in a request/reply protocol use an RMS with low delay bound. The RMS capacity may be large, unless it is known that request or reply messages will be small and infrequent.
- A stream protocol for bulk data transfer uses a high capacity, high delay RMS for data. Reliability acknowledgements use low capacity, high delay RMS's. Flow control acknowledgements use a low delay, low capacity RMS.
- Digitized voice uses a high capacity, low delay RMS, perhaps with a statistical delay bound. A high bit error rate may be acceptable.
- Communication involving human user interface traffic (such as for network window systems [24]) can tolerate a moderate amount of delay because of human perceptual limitations. The RMS from user to application carries mouse and keyboard events, and can have low capacity; the RMS in the opposite direction carries graphic information, and requires higher capacity.
In all these cases the explicit specification of client needs increases the likelihood that the provider can accommodate them. For example, if packet queueing in an internetwork gateway is done using RMS-specified deadlines, then a low-delay packet can be sent before high-delay packets that would otherwise cause it to be delivered late. A network may be capable of providing low delay or high capacity, but not both. The RMS parameters allow the client to choose.
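The deadline-based gateway queueing mentioned above can be sketched with an ordinary priority queue. This Python toy (class name and interface are ours, purely illustrative) shows how a low-delay packet overtakes earlier-arrived high-delay packets:

```python
import heapq
import itertools

class DeadlineQueue:
    """Toy gateway transmit queue: packets leave in deadline order, so a
    packet from a low-delay RMS is sent before high-delay packets that
    would otherwise cause it to be delivered late."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break for equal deadlines

    def enqueue(self, deadline_ms, packet):
        heapq.heappush(self._heap, (deadline_ms, next(self._seq), packet))

    def dequeue(self):
        # Pop the packet with the earliest deadline.
        return heapq.heappop(self._heap)[2]
```

Real per-RMS scheduling would also have to account for capacity enforcement and per-switch processing budgets; this sketch shows only the ordering decision.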
The use of RMS in DASH is based on anticipated needs and on projections of future network technology; the RMS abstraction is not supported on current networks, and cannot be built on top of simpler abstractions such as datagrams or virtual circuits. However, we feel that our approach is necessary for exploiting the advances in communication technology that will occur in the near- and long-term future.
3.2. Movement of Data Between Local Address Spaces
The efficiency of moving a large amount of data between virtual address spaces (both user spaces and kernel space) on a single machine is a major component of IPC performance. Software memory copying is the straightforward way to move data between spaces. However, memory bandwidth is improving at a slower rate than processor and network speeds. Thus, memory copying is likely to be an IPC bandwidth bottleneck and a major source of IPC delay. In addition, the bus traffic generated by memory copying will degrade system performance on shared-memory multiprocessors. Virtual memory (VM) remapping, as opposed to memory copying, is an attractive approach to moving data. However, remapping in shared-memory multiprocessors can be costly because of the problem of translation lookaside buffer (TLB) inconsistency.
The DASH mechanism for local data movement [32] eliminates many of the overheads that would otherwise arise from VM remapping in shared-memory multiprocessors. Put simply, we reduce the need for synchronous unmapping, and, when unmapping is necessary, we do it efficiently.
The DASH VM system has an IPC region that is shared (although with different levels of protection) among all address spaces. All data to be moved between spaces are placed in the IPC region. The local user IPC system involves the following layers (see Figure 4):
Figure 4: Logical Levels of Page Ownership and Mapping.
- A message-passing (MP) system providing operations to allocate, access, send, receive and deallocate messages. It is implemented by a user-level library that handles some operations itself and traps to the kernel for others.
- The facility for protected shared memory provided by the VM system's IPC region. This facility defines a notion of ownership of pages in the IPC region, and is used by the MP system for data movement. The facility can also be used directly by user processes via system calls (which are themselves implemented as MP operations).
- The Logical VM mapping interface of the VM system, whose operations include 1) mapping IPC pages on a single CPU, and 2) unmapping IPC pages on a set of CPU's. The unmapping operation (which on some architectures may require interprocessor interrupts) may be either synchronous or asynchronous. The latter type is slower but has less total overhead since multiple operations can be batched.
- The Physical VM mapping function of the VM system; this is the machine-dependent implementation of the logical VM mapping operations.
In the DASH message-passing system, the semantics of send() are that the sender's ownership of the IPC pages in the message is transferred to the receiver. The sender may have multiple ownerships of a page; when the last ownership is relinquished, the IPC page is unmapped from the sender's address space. The send() and receive() operations have parameters that can be used by the implementation to reduce work:
- The sender may specify whether it believes that the receiver trusts it. If so (and if the sender is correct), the unmap operation on send() can be deferred, and may never be done.
- The receiver may specify whether it trusts the sender or not (e.g., the kernel owner is always trusted). If the sender is not trusted, the receiver must wait until the operation of unmapping the page from the sender is completed (otherwise the sender could modify the data after it has been received). If the sender is trusted, it is not necessary to wait for a previous asynchronous unmap operation to complete.
- The receiver may specify whether to physically map in message pages immediately, or to map them in on demand. In the second case, a page is mapped in by the page fault handler when the receiver first accesses the page. No mapping is done if the page is not accessed. This optimization may be significant for applications that forward messages (e.g., a file service that receives a block from a disk device and sends it to the network without accessing it). Message forwarding is a common communication paradigm when services offered by the system execute at the user level.
The above optimizations can eliminate operations. When operations are necessary, our design allows them to be done efficiently. Synchronous unmapping is more expensive than asynchronous unmapping, and we avoid it when possible. For both types of unmapping operations, the VM system maintains a list of processors on which a page has been mapped, and only unmaps it from those processors.
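The ownership-transfer semantics of send(), and the deferral of unmapping when the receiver trusts the sender, can be modeled in a few lines. This Python sketch is a toy (class and method names are ours; real unmapping would involve per-CPU page tables and interprocessor interrupts):

```python
from collections import defaultdict

class IPCRegion:
    """Toy model of the shared IPC region: each page has per-address-space
    ownership counts, plus a queue of deferred (asynchronous) unmaps."""

    def __init__(self):
        self.owners = defaultdict(lambda: defaultdict(int))  # page -> space -> count
        self.pending_unmaps = []  # (space, page) pairs not yet unmapped

    def allocate(self, space, page):
        self.owners[page][space] += 1

    def send(self, sender, receiver, pages, receiver_trusts_sender):
        for page in pages:
            # Transfer one ownership from sender to receiver.
            self.owners[page][sender] -= 1
            self.owners[page][receiver] += 1
            if self.owners[page][sender] == 0:
                # Last ownership relinquished: the page must leave the
                # sender's space. If the receiver trusts the sender, the
                # unmap may be deferred and batched asynchronously;
                # otherwise the receiver must wait for a synchronous unmap
                # (else the sender could modify data after delivery).
                if receiver_trusts_sender:
                    self.pending_unmaps.append((sender, page))
                else:
                    self.unmap_now(sender, page)

    def unmap_now(self, space, page):
        pass  # would issue (possibly cross-CPU) unmap operations

    def flush_pending(self):
        # Batch-process deferred unmaps: slower individually, but with
        # less total overhead than one synchronous unmap per send().
        for space, page in self.pending_unmaps:
            self.unmap_now(space, page)
        self.pending_unmaps.clear()
```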
3.3. DASH IPC: Summary
The DASH IPC design is vertically integrated in the following ways:
1. The RMS abstraction spans all levels (network and local) of the IPC system, making real-time communication between user processes possible.
2. The elimination of software copying in network communication involves a combination of message-passing semantics, protocols, kernel architecture, and virtual memory.
3. Although authentication and security mechanisms are defined at a high level (see the discussion of global naming in Section 4), they are enforced at a low level of the protocol hierarchy. This can substantially reduce the expense of these functions [6].
The DASH IPC facility is also an open system:
1. The RMS interface allows different network types to make their nonstandard functions (e.g., encryption in hardware) visible to upper layers, and to implement RMS in a network-specific way.
2. The framework for stream protocols allows new protocols to be added to the kernel, and combined with other protocols, in a standard way. Clients can use protocols that provide the exact combination of functions (capacity enforcement, flow control, and reliability) that they need.
The DASH IPC design has numerous advantages over those of current distributed systems. In systems such as V [11] designed for local-area networks, IPC is often limited to reliable request/reply communication. This is inadequate for VLDS. In systems such as Mach [1], IPC is forced to pass through a user-level "network server", and this further limits potential performance. IPC performance can be hampered by the basic semantics. For example, the semantics of the UNIX write operation (shared by Mach and most other systems) are that the sender retains a (logical) copy of the data sent. In DASH, the semantics are that the sender loses the data from its space. This eliminates the work of creating a logical or physical copy of the data.
4. GLOBAL SYSTEM ARCHITECTURE
The global architecture of a distributed system is centered in its naming mechanisms. There often are naming mechanisms at multiple levels; they differ in the nature of the named entities, and the means of assigning and resolving names. Some of these mechanisms may also involve authentication and security.
The global architecture for a VLDS has the following requirements:
- There must be a global naming system on which security functions are based.
- The naming system must support organizational autonomy in the senses of 1) hierarchical delegation of authority for name assignment, and 2) lack of central trusted agents in name resolution.
- Naming must be source- and target-location independent. This reduces the location-dependence of program execution, thus simplifying large-scale distributed programming [2].
- The naming system must be scalable so that performance does not decline with increasing system size, even when remote references are frequent [29].
4.1. The DASH Global System Architecture
4.1.1. Naming and Authentication
DASH global names are symbolic pathnames in a single tree-structured name space. There are four types of named entities in the DASH global name space: hosts, owners, services, and name services. The internal nodes of the tree represent name services, and the leaves of the tree represent the other entity types (see Figure 5). The different entity types, and their associated attributes, are as follows:
- An owner is an individual human user or "role". Its attributes include two public keys: a user key and a kernel key.
- A host is a network-level communication endpoint. Its attributes include a list of its network addresses and the name of its owner.
- A service is a logical resource provided by a set of programs or processes. Its attributes include 1) a list of (host name, instance ID) pairs, each specifying an instance of the service, and 2) the name of the owner of the service.
- A name service is a special type of service that manages the names of other entities. A name service maintains a single directory in which each entry has a name (a pathname component), a type, and a set of attributes.
The DASH service access mechanism allows services to extend the global name space below their own name. Hence they can provide global names for the objects they manage. For example, a file service might provide hierarchical naming of its files, so that a reference to /usa/ucl/cst/fs/anderson/foo might map to the file /anderson/foo within the file service /usa/ucl/cst/fs. This removes the need to distinguish the two levels of naming, and makes it possible for a service to serve as a manager of named "objects" or to provide logical "sub-services".
The DASH kernel maintains a cache of name resolutions. Even with this caching, the work involved in component-by-component resolution of long pathnames may be excessive, particularly if it must be done for frequently-accessed objects such as disk files. To avoid this problem, the DASH kernel allows user programs to obtain name tokens, each of which represents a pathname and its cached resolution. In further name references the client provides the token and a symbolic name extension; the kernel can begin resolution starting from the object represented by the token. (An analogous mechanism for references to names within services is described in the next section.)
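The name-token idea — resolve a long pathname once, then resume resolution from the cached point using a symbolic extension — can be sketched as follows. This Python toy is illustrative only (the tree structure and method names are ours, and real resolution would cross name-service boundaries):

```python
import itertools

class NameService:
    """Toy name tree: internal nodes are directories (dicts), leaves are
    entity descriptors (plain strings here)."""

    def __init__(self, tree):
        self.tree = tree
        self.tokens = {}          # token id -> resolved node
        self._ids = itertools.count(1)

    def resolve(self, start, path):
        # Component-by-component resolution; KeyError models failure.
        node = start
        for component in path.strip("/").split("/"):
            node = node[component]
        return node

    def get_token(self, path):
        """Resolve a full pathname once and hand back a reusable token."""
        token = next(self._ids)
        self.tokens[token] = self.resolve(self.tree, path)
        return token

    def resolve_from_token(self, token, extension):
        """Resume resolution at the object the token represents,
        skipping the already-resolved prefix."""
        return self.resolve(self.tokens[token], extension)
```

With a token for the file service's own name, each file reference costs only the resolution of the short extension, not the full pathname.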
4.1.2. Service Access
In the DASH distributed system architecture, services are a class of logical resources. A DASH service is a set of instances that together provide a logical resource. Each instance resides on a single host, and may consist of a process, a set of processes, or a "registration" with the host kernel that causes a process to be created as needed.
The intent of the DASH global architecture is that services, where possible, should be globally accessible. The service access facility described in this section makes remote
access, for both the client and server, as convenient as local access.
The DASH service access mechanism (SAM) allows clients to name and communicate with services in a uniform way. It provides:
**Replication transparency:** a client need not know which instance handles a particular request or session.
**Location transparency:** service names do not specify or limit the location of the servers.
**Failure transparency:** if a service instance fails, SAM may locate another instance of the service and connect the client to it automatically.
**Protocol flexibility:** services may provide interfaces that have real-time communication performance requirements, or that need special-purpose stream protocols.
A replicated service may provide a consistent data abstraction, in which case it needs to ensure data consistency between instances. DASH does not supply or dictate any method for this, or for ensuring the atomicity or permanence of operations on services. Such mechanisms must be supplied by the services themselves, perhaps in cooperation with a higher-level transaction manager.
Services can be accessed in two basic modes. In *Request/reply mode*, the operation is conveyed to the server via RKO; two reliability types (*maybe* and *exactly once*) are available. In *Session mode*, SAM locates an instance of the service and sets up a communication channel (or *bundle*) between the client and server.
A client may request a *service token* representing an object within the service. The token can thereafter be supplied in lieu of a name in operations on the service. A token has an associated set of operations, specified (by a bitmask) when the token is requested; the token provides the right to perform these operations on the object to which it refers, bypassing any underlying protection mechanism. A token may have no access rights, in which case it serves simply as a name abbreviation that can be used in forming other names.
The token scheme can improve performance in two ways: 1) it eliminates the need for the service to check authorization on every operation; 2) it eliminates the need for SAM and the service to do name translation on each operation.
Service tokens may be discarded at any time by a service. This may be done either to limit table size, or to force periodic reauthorization in support of an "eventual revocation" policy. The client (or the client kernel) must remember the name and operation set, and be prepared to issue another token request. A token does not represent a "session"; two tokens representing the same name and having the same rights are interchangeable. Tokens are usable only during a crash-free period for both the client and the service instance.
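A minimal sketch of the service-token table described above, in Python (the rights constants, class interface, and exception choices are ours, purely for illustration):

```python
# Illustrative rights bitmask; a real service defines its own operation set.
READ, WRITE, DELETE = 1, 2, 4

class Service:
    """Toy service-token table. Tokens may be discarded at any time;
    the client must remember the name and rights and re-request."""

    def __init__(self):
        self._tokens = {}   # token -> (object name, rights bitmask)
        self._next = 1

    def request_token(self, name, rights):
        # Authorization would be checked once, here, not on every operation.
        token = self._next
        self._next += 1
        self._tokens[token] = (name, rights)
        return token

    def operate(self, token, op):
        entry = self._tokens.get(token)
        if entry is None:
            # Stale token (discarded or post-crash): client must re-request.
            raise LookupError("stale token")
        name, rights = entry
        if not rights & op:
            raise PermissionError(op)
        return (op, name)   # the operation itself is elided

    def discard_all(self):
        # E.g. to bound table size or force periodic reauthorization.
        self._tokens.clear()
```

Because two tokens with the same name and rights are interchangeable, the client can transparently replace a discarded token with a freshly requested one.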
4.2. DASH Global Architecture: Summary
The DASH global architecture defines a structure for global naming of permanent entities (hosts, owners and services), and for local naming of temporary entities (service tokens). Service tokens and distributed name caching are vertically integrated mechanisms in the sense that they involve both distributed and local architectures. The DASH global architecture is an open system in that it facilitates the addition of new services by users. At a lower level, it is an open system in that 1) it provides an open framework for stream-oriented service protocols; 2) it supports inter-service naming; 3) authentication is factored from authorization.
The DASH global architecture compares favorably with those of other systems as a basis for VLDS. The V system [11] is a global system architecture intended for small distributed systems. It is not scalable; for example, it requires broadcast for service location. The Xerox Grapevine system [25] is a nameserver with poor scalability because of its nonhierarchical design. Other large-scale system architectures have used hierarchical naming for scalability and autonomy [9,28]. These are designed for limited purposes, such as host naming, and are not integrated with other types of naming (such as the naming of files). Efforts at large-scale integration of existing centralized systems are described in [30] and [23]. These projects, for the most part, address restricted problems or develop solutions based on technology that will soon be outdated. In contrast, the DASH project is taking a unified approach to VLDS design, and is seeking solutions that will not be made obsolete by foreseeable technology advances.
Some distributed systems use capabilities for low-level naming and protection. Examples include Amoeba [18], Mach [22], and Eden [3]. Typically, a symbolic naming (directory) service is built on top of the capability mechanism. Arguments against the use of capabilities as a basis for identifying permanent objects are given in [4].
5. LOCAL SYSTEM ARCHITECTURE
By the local system architecture of an OS kernel we mean 1) the user-level abstractions provided for process control, system calls, and so on; 2) the dynamic (process and interrupt) structure of the kernel, and 3) the software engineering (programming) structure of the kernel. VLDS local system architecture has the following requirements:
- Support for user-level services: efficient IPC and control transfer between user-level processes, and support for user-level caching.
- Support for parallelism on shared-memory multiprocessors, both in the facilities provided to user programs and in the implementation of the kernel itself.
- Incorporation of modern software engineering ideas; the kernel must be maintainable, extensible, and portable across a range of architectures.
5.1. The DASH Local System Architecture
The DASH kernel is structured as a set of processes that share a single address space, communicating via message-passing and synchronized operations on shared data objects. The kernel uses multiple processes wherever possible. Communication protocols, RKOM servers, device drivers, system calls and other kernel functions execute as separate processes that can run in parallel on a multiprocessor. Work that on other systems might be done in hardware or software interrupt handlers is instead done in processes.
The kernel provides user virtual address spaces, each occupied by zero or more user processes. Each user address space has a set of protected object references to objects in the kernel.
5.1.1. Message-Passing
Message-passing (MP) is used for many purposes in DASH:
1. Interprocess communication (IPC) between processes on a single host: user processes, communication protocols (which are implemented as processes), and other kernel-level processes.
2. Control transfer between address spaces. For example, system calls (during which a process switches from user to kernel space and back) use MP.
3. Allocation, buffering and queueing of data buffers: for example, network interface queues and virtual memory page pools.
4. Implementation of other synchronization mechanisms such as timers, multiple wait, semaphores, and read/write locks.
Because of this wide range of uses, the MP system must provide many semantic features, described below. The MP system allows clients to use (and pay for) exactly the features they need.
The MP system consists of two major parts:
- **Message representation**: A message is a logical array of bytes, implemented by a data structure consisting of a header and a set of not necessarily contiguous data areas. The interface to messages is a set of operations for creating, manipulating and accessing messages. Message headers contain space for parameters to message-passing operations.
- **MP operations** (some of which are described below). This part is extensible; instead of having a single object type (port, mailbox, and so on) as the target of MP operations, there are several such types. It is simple to add new types as the kernel is developed. The various MP object types provide a variety of message-passing operation semantics. These operations have four binary degrees of freedom, yielding 16 logical combinations; all are possible and potentially useful, but only a subset are currently supported in DASH.
**Stream vs. request/reply**: in stream mode, message flow is unidirectional, while in request/reply mode, message exchanges occur in synchronized pairs.
**Uniprocess vs. dual-process**: a stream-mode message, or the request message in a request/reply operation, may be processed in the context of the sending process, or by a separate process.
**Mode of sender**: kernel processes invoke MP operations by making procedure calls. MP operations can also be invoked from user-level processes. The semantics are essentially the same as for kernel processes, but user-level MP operations are initiated via a trap instruction; the kernel trap handler completes the operation.
**Mode of receiver**: receive processing may be done in a different mode than that of the sender. For example, DASH system calls are implemented as uniprocess MP operations that are initiated in user mode and processed in kernel mode.
An MP object supports either stream mode or request/reply mode operations. Each mode is represented by a base class whose virtual functions are the generic operations on MP objects of that mode. Derived classes implement these functions. Clients, in general, do not have to know the exact type of MP object, only its base class. This object-oriented MP system was motivated by the following considerations:
- **System calls**: these are message-passing operations directed to system call MP objects. By default, the system call object is uniprocess, and system calls are executed by the calling process in kernel mode (no process switch is done). By substituting a dual-process system-call object, system calls can be redirected to other processes (e.g., for debugging purposes) transparently to the user process.
- The DASH network communication architecture allows stream protocols to be dynamically configured. Each protocol is a process, and communication between protocols is via stream-mode MP objects, but the ST layer allows network messages to be sent by procedure call. Hence, if the MP system had been designed differently, it would have been necessary for a protocol process to know whether it was connected directly to the subtransport layer, or to an intervening protocol.
Some MP objects may provide additional features:
- Messages may convey scheduling deadlines between processes.
- Certain MP objects (of both modes) serve as the means of accessing a "pool of servers". It is often desirable to have multiple server processes, so that multiple requests can be executed in parallel. The optimal number of servers may not be known in advance. Such MP objects may use a feature called automatic receiver creation. If a message is sent to such an MP object and there is no process waiting to receive it, a process will automatically be created.
- Dual-process stream MP objects act as buffers between producer and consumer processes. Operations on such objects can be subjected to flow control. Flow control may be based either on the number of queued messages, or the amount of data in the queue. It is also possible to use hysteresis for both the sender and the receiver. This can reduce the number of context switches, since a process can handle a batch of messages in one context switch.
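The hysteresis-based flow control described above can be sketched in C++ (a toy single-threaded model with invented names, not DASH code): the sender is blocked at a high watermark and woken only when the queue drains to a low watermark, so each wakeup lets the consumer handle a batch of messages.

```cpp
#include <cstddef>
#include <deque>

// Toy model of a dual-process stream MP object with flow-control
// hysteresis. All names are invented for illustration.
struct StreamQueue {
    std::deque<int> q;
    std::size_t high, low;       // high and low watermarks
    bool sender_blocked = false;
    int wakeups = 0;             // sender wakeups (~ context switches)

    StreamQueue(std::size_t hi, std::size_t lo) : high(hi), low(lo) {}

    // Returns false if the send must block (queue at high watermark).
    bool send(int msg) {
        if (q.size() >= high) { sender_blocked = true; return false; }
        q.push_back(msg);
        return true;
    }

    // Receive one message; wake the sender only at the low watermark.
    bool receive(int* out) {
        if (q.empty()) return false;
        *out = q.front();
        q.pop_front();
        if (sender_blocked && q.size() <= low) {
            sender_blocked = false;
            ++wakeups;
        }
        return true;
    }
};
```

With a high watermark of 4 and a low watermark of 1, a blocked sender is woken only after three messages have been drained, rather than once per message.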
5.1.2. Kernel Memory Management
The DASH kernel dynamically creates objects. Pointers to these objects may be distributed throughout the kernel, so it is not always safe to deallocate their memory. This creates a potential problem of unbounded memory usage. This problem is dealt with in two ways.
First, the kernel executes in a virtual address space, part of which is pageable. The constructor for each object class specifies whether memory is to be allocated from the pageable or nonpageable part. Objects that are cache entries (such as objects in the name service cache) are kept in the pageable part. Therefore the VM page replacement scheme (e.g., an LRU approximation) is inherited by all kernel caches.
Second, certain types of objects (such as ports) have the properties that 1) many references to the object can exist, and it is not feasible to keep track of them; 2) the object can be deleted at any time, and this must be detected on any future reference to the object. In DASH, these objects are allocated using a pseudo-permanent object facility. Memory blocks allocated for such objects are preceded by a unique ID field that is cleared when the object is freed. A memory block allocated for this purpose can be reused for other pseudo-permanent objects, but not for other purposes. A reference to a pseudo-permanent object consists of a pointer to the memory block and a UID value; if this value does not match, then the object has been deleted.
Together, these two techniques eliminate the need for general-purpose garbage collection in the DASH kernel.
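The pseudo-permanent reference check can be sketched in C++ (the kernel's implementation language); all names here are invented for illustration and are not taken from the DASH sources:

```cpp
#include <cstdint>

// A memory block for a pseudo-permanent object carries a unique ID
// that is cleared when the object is freed; a reference stores the
// pointer plus the UID it expects. A mismatch means "deleted".
struct Block {
    uint64_t uid;     // 0 once the object has been freed
    int payload;      // the object's data (simplified)
};

struct Ref {
    Block*   block;
    uint64_t uid;     // UID captured when the reference was made
};

static uint64_t next_uid = 1;

Ref make_object(Block* b, int value) {
    b->uid = next_uid++;
    b->payload = value;
    return Ref{b, b->uid};
}

// Free the object: clearing the UID makes stale references detectable.
void free_object(Block* b) { b->uid = 0; }

// Dereference: valid only while the stored UID still matches.
bool deref(const Ref& r, int* out) {
    if (r.block->uid != r.uid) return false;  // object was deleted
    *out = r.block->payload;
    return true;
}
```

Even if the block is reused for a new pseudo-permanent object, old references keep failing, because the new object receives a fresh UID.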
5.1.3. Kernel Software Engineering
The DASH kernel is being implemented in an object-oriented language, C++ [27]. The kernel is structured as a set of classes (abstract types), of which some have multiple dynamically-created instances. In keeping with the principles of object-oriented programming, the class/object structure encapsulates design decisions and machine dependencies. Our intent is to produce a maintainable, extensible and portable kernel. We view the DASH project as an important test case in the application of object-oriented techniques to OS kernel implementation.
5.2. The DASH Local Architecture: Summary
The DASH local architecture is *vertically integrated* in several ways:
- The dynamic structure of the kernel is a set of processes that communicate through a versatile message-passing facility. The same structure is available at the user level, and the system call interface uses message-passing.
- The static structure of the kernel is a set of objects, some of which are pseudo-permanent. The same structure is presented at the user level using protected object references, and is presented remotely via remote object references.
The DASH local architecture provides considerable "openness". The kernel process model and the object-oriented message-passing system simplify the task of experimenting with kernel parallelism. In addition, several mechanisms that are entangled in the kernel of other systems are moved to the user level in DASH:
- Process control and exception handling.
- Current directory or other context mechanisms.
- Transaction management.
The themes of the DASH local architecture are related to those of many existing operating systems. Other systems [8, 19] use message-passing in their kernel implementation for increased parallelism. Message-passing for exceptions, system calls and process control is used in V [11].
6. CONCLUSION
The DASH project at UC Berkeley is studying the design principles of future distributed systems. Based on technological trends, we predict the development of very large distributed systems (VLDS) based on high-performance wide-area networks and providing global access to a variety of data and computing resources. We have made the following general conclusions:
(1) The performance and flexibility requirements that a VLDS places on low-level mechanisms (virtual memory, process control, kernel structure, naming, local IPC, and network communication) are not met by the corresponding mechanisms of current distributed systems. New designs and experimentation are needed in these areas.
(2) The optimal low-level mechanisms for a VLDS often involve 1) the vertical integration of components at different system levels, and 2) an open system design in which the standardized portions of the system are minimal.
At this point (February 1988) much of the DASH design is in place, and the implementation of the kernel is proceeding. We plan on completing the basic design, completing uniprocessor and multiprocessor implementations of the kernel, and evaluating the basic design. Once this is done, the DASH system can serve as a testbed for research and development in many new and unexplored areas of distributed systems, especially those involving real-time communication and very large-scale replication and distributed processing.
There are also many areas for research within the design areas discussed in this report:
- In the IPC area, several issues involving RMS remain to be investigated. How can RMS be implemented on current networks and internetworks? What are appropriate stream transport protocols? What new types of applications does RMS make possible? How can multicast be implemented on top of RMS? Other problems involve the VM-based data movement system; these will be explored when DASH is ported to shared-memory multiprocessors.
- In the global architecture, the performance of the global naming system must be investigated. Other problems include the design of protection mechanisms in a VLDS, the design of highly replicated data servers, and the formal analysis of trust relationships in naming.
- In the local architecture, research problems involve 1) multiprocessor synchronization (both the mechanisms themselves, and their integration in programming languages), 2) process control for remote debugging, 3) process scheduling for multiprocessors, and 4) support for caching in user-level services.
7. ACKNOWLEDGEMENT
We would like to thank the following people for their contributions to the DASH project: Brian Bershad, G.D. Giuseppe Facchetti, Kevin Fall, G. Scott Graham, Ellen Nelson, P. Venkat Rangan, Bruno Sartirana, Shin-Yuan Tzou, Raj Vaswani, and Robert Wahbe.
REFERENCES
Introduction
The following tutorial is intended to get you going quickly in circuit design in Verilog. It isn’t a comprehensive guide to System Verilog, but should contain everything you need to design circuits for your class.
If you have questions, or want to learn more about the language, I’d recommend Vahid and Lysecky’s Verilog for Digital Design.
Modules
The basic building block of Verilog is a module. This is similar to a function or procedure in C/C++/Java in that it performs a computation on the inputs to generate an output. However, a Verilog module really is a collection of logic gates, and each time you call a module you are creating that set of gates.
An example of a simple module:
```verilog
module AND_OR(andOut, orOut, A, B);
output logic andOut, orOut;
input logic A, B;
and TheAndGate (andOut, A, B);
or TheOrGate (orOut, A, B);
endmodule
```
We can analyze this line by line:
```verilog
// Compute the logical AND and OR of inputs A and B.
module AND_OR(andOut, orOut, A, B);
output logic andOut, orOut;
input logic A, B;
and TheAndGate (andOut, A, B);
or TheOrGate (orOut, A, B);
endmodule
```
The first line is a comment, designated by the //. Everything on a line after a // is ignored. Comments can appear on separate lines, or at the end of lines of code.
output logic andOut, orOut;
input logic A, B;
The top of a module gives the name of the module (AND_OR in this case), and the list of signals connected to that module. The subsequent lines indicate that the first two binary values (andOut and orOut) are generated by this module, and are output from it, while the next two (A, B) are inputs to the module.
and TheAndGate (andOut, A, B);
or TheOrGate (orOut, A, B);
This creates two gates: An AND gate, called “TheAndGate”, with output andOut, and inputs A and B; An OR gate, called “TheOrGate”, with output orOut, and inputs A and B. The format for creating or “instantiating” these gates is explained below.
endmodule
All modules must end with an endmodule statement.
**Basic Gates**
Simple modules can be built from several different types of gates:
buf <name> (OUT1, IN1); // Sets output equal to input
not <name> (OUT1, IN1); // Sets output to opposite of input
The <name> can be whatever you want, but it must start with a letter and consist of letters, numbers, and the underscore "_". Avoid keywords from Verilog (i.e. "module", "output", etc.).
There are multi-input gates as well, which can each take two or more inputs:
and <name> (OUT, IN1, IN2); // Sets output to AND of inputs
or <name> (OUT, IN1, IN2); // Sets output to OR of inputs
nand <name> (OUT, IN1, IN2); // Sets to NAND of inputs
nor <name> (OUT, IN1, IN2); // Sets to NOR of inputs
xor <name> (OUT, IN1, IN2); // Sets output to XOR of inputs
xnor <name> (OUT, IN1, IN2); // Sets to XNOR of inputs
If you want to have more than two inputs to a multi-input gate, simply add more. For example, this is a five-input and gate:
and <name> (OUT, IN1, IN2, IN3, IN4, IN5); // 5-input AND
**Hierarchy**
Just like we build up a complex software program by having procedures call subprocedures, Verilog builds up complex circuits from modules that call submodules. For example, we can take our previous AND_OR module, and use it to build a NAND_NOR:
```verilog
// Compute the logical AND and OR of inputs A and B.
module AND_OR(andOut, orOut, A, B);
    output logic andOut, orOut;
    input logic A, B;

    and TheAndGate (andOut, A, B);
    or TheOrGate (orOut, A, B);
endmodule

// Compute the logical NAND and NOR of inputs X and Y.
module NAND_NOR(nandOut, norOut, X, Y);
    output logic nandOut, norOut;
    input logic X, Y;
    logic andVal, orVal;

    AND_OR aoSubmodule (.andOut(andVal), .orOut(orVal),
                        .A(X), .B(Y));
    not n1 (nandOut, andVal);
    not n2 (norOut, orVal);
endmodule
```
Notice that in the NAND_NOR procedure, we now use the AND_OR module as a gate
just like the standard Verilog “and”, “not”, and other gates. That is, we list the module’s
name, what we will call it in this procedure (“aoSubmodule”), and the outputs and inputs:
AND_OR aoSubmodule (.andOut(andVal), .orOut(orVal),
.A(X), .B(Y));
Note that unlike C/C++/Java where we use the order of parameters to indicate which
caller values connect to which submodule ports, in Verilog we explicitly name the ports.
That is, when we say:
.andOut(andVal)
We mean that the “andVal” wires in the caller module are connected to the “andOut” wires in the called submodule. This explicit naming tends to avoid mistakes, especially when someone adds or deletes ports inside the submodule. Note that every signal name in each module is distinct. That is, the same name can be used in different modules independently. In fact, if the caller module wants to hook a wire to a port of a submodule with the same name, there’s a shorthand for that. For example, if we had the call:
```verilog
AND_OR aoSubmodule (.andOut(andOut), .orOut(orVal),
.A(A), .B(B));
```
We could write that alternatively as:
```verilog
AND_OR aoSubmodule (.andOut, .orOut(orVal), .A, .B);
```
This hooks andOut in the caller to andOut of the submodule, as well as A to A and B to B.
Just as we had more than one not gate in the NAND_NOR module, you can also call the same submodule more than once. So, we could add another AND_OR gate to the NAND_NOR module if we chose to—we simply have to give it a different name (like “n1” and “n2” on the not gates). Each call to the submodule creates new gates, so three calls to AND_OR (which creates an AND gate and an OR gate in each call) would create a total of 2*3 = 6 gates.
One new statement in this module is the “logic” statement:
```verilog
logic andVal, orVal;
```
This creates what are essentially local variables in a module. In this case, these are actual wires that carry the signals from the output of the AND_OR gate to the inverters.
Note that we chose to put the not gates below the AND_OR in this procedure. The order actually doesn't matter: the calls to the modules hook gates together, and the order in which they "compute" doesn't depend on their placement order in the code, since all gates execute in parallel anyway. Thus, we could swap the order of the "not" and "AND_OR" lines in the module freely.
**Boolean Equations and “Assign”**
You can also write out Boolean equations in Verilog within an "assign" statement, which sets a "logic" variable to the result of a Boolean equation. OR is "|", AND is "&", negation is "~", and XOR is "^". For example, we can compute not((A and B) or (C and D)) by:
```verilog
assign F = ~((A & B) | (C & D));
```
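For example, a complete module (the name MAJ3 is invented for this sketch) that uses a single assign to compute a majority vote of three inputs:

```verilog
// True when at least two of the three inputs are true.
module MAJ3(out, a, b, c);
    output logic out;
    input logic a, b, c;

    assign out = (a & b) | (a & c) | (b & c);
endmodule
```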
**True and False**
Sometimes you want to force a value to true or false. We can do that with the numbers “0” = false, and “1” = true. For example, if we wanted to compute the AND_OR of false and some signal “foo”, we could do the following:
```verilog
AND_OR aoSubmodule (.andOut(andVal), .orOut(orVal),
.A(0), .B(foo));
```
**Delays**
Normally Verilog statements are assumed to execute instantaneously. However, Verilog does support some notion of delay. Specifically, we can say how long the basic gates in a circuit take to execute with the `#` operator. For example:
```verilog
// Compute the logical AND and OR of inputs A and B.
module AND_OR(andOut, orOut, A, B);
output logic andOut, orOut;
input logic A, B;
and #5 TheAndGate (andOut, A, B);
or #10 TheOrGate (orOut, A, B);
endmodule
```
This says that the `and` gate takes 5 “time units” to compute, while the `or` gate is twice as slow, taking 10 “time units”. Note that the units of time can be whatever you want – as long as you put in consistent numbers.
**Defining constants**
Sometimes you want to have named constants - variables whose value you set in one place and use throughout a piece of code. For example, setting the delay of all units in a module can be useful. We do that as follows:
```verilog
// Compute the logical AND and OR of inputs A and B.
module AND_OR(andOut, orOut, A, B);
output logic andOut, orOut;
input logic A, B;
parameter delay = 5;
and #delay TheAndGate (andOut, A, B);
or #delay TheOrGate (orOut, A, B);
endmodule
```
This sets the delay of both gates to the value of “delay”, which in this case is 5 time units. If we wanted to speed up both gates, we could change the value in the parameter line to 2.
**Parameterized Design**
Parameters can also be inputs to designs, that allow the caller of the module to set the size of features of that specific instance of the module. So, if we have a module such as:
```verilog
module adder #(parameter WIDTH=5) (out, a, b);
output logic [WIDTH-1:0] out;
input logic [WIDTH-1:0] a, b;
assign out = a + b;
endmodule
```
This defines a parameter “WIDTH” with a default value of 5 – any instantiation of the adder module that does not specify a width will have all of the internal variable widths set to 5. However, we can also instantiate other widths as well:
// A 16-bit adder
adder #( .WIDTH(16) ) add1 (.out(o1), .a(a1), .b(b1));
// A default-width adder, so 5-bit
adder add2 (.out(o2), .a(a2), .b(b2));
**Test benches**
Once a circuit is designed, you need some way to test it. For example, we'd like to see how the NAND_NOR circuit we designed earlier behaves. To do this, we create a test bench. A test bench is a module that calls your device under test (DUT) with the desired input patterns, and collects the results. For example, consider the following:
```verilog
// Compute the logical AND and OR of inputs A and B.
module AND_OR(andOut, orOut, A, B);
    output logic andOut, orOut;
    input logic A, B;

    and TheAndGate (andOut, A, B);
    or TheOrGate (orOut, A, B);
endmodule

// Compute the logical NAND and NOR of inputs X and Y.
module NAND_NOR(nandOut, norOut, X, Y);
    output logic nandOut, norOut;
    input logic X, Y;
    logic andVal, orVal;

    AND_OR aoSubmodule (.andOut(andVal), .orOut(orVal),
                        .A(X), .B(Y));
    not n1 (nandOut, andVal);
    not n2 (norOut, orVal);
endmodule

module NAND_NOR_testbench; // No ports!
    logic X, Y;
    logic nandOut, norOut;

    initial begin // Stimulus
        X = 1; Y = 1; #10;
        X = 0; #10;
        Y = 0; #10;
        X = 1; #10;
    end

    NAND_NOR dut (.nandOut, .norOut, .X, .Y);
endmodule
```
The code to notice is that of the module “NAND_NOR_testbench”. It instantiates one copy of the NAND_NOR gate, called “dut” (device under test), and hooks up “logic” signals to all of the I/Os.
In order to provide test data to the dut, we have a stimulus block:
initial begin // Stimulus
X = 1; Y = 1; #10;
X = 0; #10;
Y = 0; #10;
X = 1; #10;
end
The code inside the “initial” statement is only executed once. It first sets X and Y to true. Then, due to the “#10” the system waits 10 time units, keeping X and Y at the assigned values. We then set X to false. Since Y wasn’t changed, it remains at true. Again we wait 10 time units, and then we change Y to false (X remains at false). If we consider the entire block, the inputs XY go through the pattern 11 -> 01 -> 00 -> 10, which tests all input combinations for this circuit. Other orders are also possible. For example we could have done:
initial begin // Stimulus
X = 0; Y = 0; #10;
Y = 1; #10;
X = 1; Y = 0; #10;
Y = 1; #10;
end
This goes through the pattern 00 -> 01 -> 10 -> 11. In fact, there’s a shorthand for doing this format:
integer i;
initial begin // Stimulus
    for (i = 0; i < 4; i++) begin
        {X,Y} = i; #10;
    end
end
We use the fact that integers are encoded in binary, and the binary values go through the pattern 000, 001, 010, 011, 100, 101, … If you want to put N binary signals through all combinations of inputs, then use the same code, but replace the upper limit of 4 with the integer whose value is $2^N$.
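As a sketch (module and signal names invented), the same idea applied to a three-input AND gate, covering all $2^3 = 8$ input combinations:

```verilog
module AND3_testbench; // No ports!
    logic A, B, C;
    logic out;
    integer i;

    and TheAndGate (out, A, B, C); // 3-input AND under test

    initial begin // Stimulus: all 8 input combinations
        for (i = 0; i < 8; i++) begin
            {A, B, C} = i; #10;
        end
    end
endmodule
```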
**Printing values to the console**
Most development will use the waveforms in the simulator to show the state of wires over time. However, sometimes in debugging it is useful to print messages as well – for example, any time an error condition is found, you may want to print a text message saying what the error is, and print the value of some variables. The $display command does this. Specifically, if we did:
```verilog
initial begin // Stimulus
#1000;
if (err != 0)
$display($time, , "Found an error, code: ", err);
end
```
$display fires once, at the time specified. If you would like to be alerted whenever a value changes, use $monitor:
```verilog
initial // Response
$monitor($time, , SEL, I, J, , V);
```
This code prints whenever any of the variables being monitored changes.
**Register Transfer Level (RTL) Code**
In the earlier sections, we showed ways to do structural designs, where we tell the system exactly how to perform the design. In RTL code we instead state what we want, and allow the Verilog system to automatically determine how to do the computation.
*Note: in RTL it is easy to forget how the hardware actually works, and pretend it’s just C or Java. This is a great way to design AWFUL hardware. Because of this, we will give you stylized ways of using the constructs, which will guide you towards better versions. Our introductory logic class spends a lot of time showing what hardware is generated from various Verilog structures – think about the hardware your Verilog actually requires!*
In each of these cases, RTL code is done in an "always" block. If the computation is combinational, it should be in an "always_comb" block, while computations that remember things, and thus are state-holding, should be in an "always_ff".
**Begin-end**
Begin and end statements merge together multiple statements into one, like the “{ }” braces in C and Java. For statements below such as “if-then-else” and “case”, you can use begin and end to merge together multiple statements.
**If-then-else**
```verilog
logic V1, V2;

always_comb begin
    if (A == 1) begin
        V1 = 1;
        V2 = 0;
    end else if (A == 0 & B == 1) begin
        V1 = 1;
        V2 = 1;
    end else begin
        V1 = 0;
        V2 = 0;
    end
end
```
You can set the output of a wire based upon the value of other signals. The if-then-else is similar to software programming languages. Note however that you should make sure that all signals are defined in all cases (i.e. it would be a problem to delete either V1 or V2 from any of these clauses).
If you think through this code, it is equivalent to a logic function. For example, V1 is true only when A == 1, or when A == 0 and B == 1. This is equivalent to $V1 = A + \neg(A) \cdot B = A + B$. Similarly $V2 = \neg(A) \cdot B$.
**case**
As we move to multi-bit signals, that can take on values more than just 0 and 1, the case statement becomes quite useful. The variable to be considered is placed in the “case ()” statement, and then different values are listed, with the associated action. For example, in the code below when the “state” variable is equal to 0, HEX is set to 0, while if the “state” variable is equal to 1, HEX is set to 1, and so on. There must also always be a “default” case, which is used when no other case matches. Also, like the if-then-else statement, any variable set in any part of the case statement should be set in all states. That is, dropping HEX from any of the “state” value lines would be incorrect.
In this code we use “1’bX” to indicate a 1-bit binary don’t care in the default case, allowing the Verilog system to use Don’t Cares in the minimization.
```verilog
logic HEX;
always_comb begin
    case (state)
        0: HEX = 0;
        1: HEX = 1;
        2: HEX = 1;
        3: HEX = 0;
        4: HEX = 1;
        default: HEX = 1'bX;
    endcase
end
```
**Sequential Logic**
In combinational logic we start an always block with “always_comb”, which means the logic output is recomputed every time any of the inputs changes. For sequential logic, we need to introduce a clock, which will require a somewhat different always statement:
```verilog
// D flip-flop w/synchronous reset
module D_FF (q, d, reset, clk);
    output logic q;
    input logic d, reset, clk;

    always_ff @(posedge clk) begin // Hold val until clock edge
        if (reset)
            q <= 0; // On reset, set to 0
        else
            q <= d; // Otherwise out = d
    end
endmodule
```
Most of this should be familiar. The new part is the "always_ff @(posedge clk)", which says to execute the following statements only at the instant a positive edge of clk is seen. That means we have a positive edge-triggered flip-flop. We can build a negative edge-triggered flip-flop via "always_ff @(negedge clk)".
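Building on this pattern, here is a sketch (names invented) of a D flip-flop with a synchronous enable, which holds its value whenever en is 0:

```verilog
// D flip-flop w/synchronous reset and enable
module D_FF_EN (q, d, en, reset, clk);
    output logic q;
    input logic d, en, reset, clk;

    always_ff @(posedge clk) begin
        if (reset)
            q <= 0;     // On reset, set to 0
        else if (en)
            q <= d;     // Load only when enabled
        // otherwise q holds its old value
    end
endmodule
```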
**Assignment Styles: = vs. <=**
Verilog includes two different ways to assign values to variables: = and <=. They are subtly different:
- = The assignment occurs immediately. A subsequent line will see the new value. So, a = b; c = a; will set both a and c to the value contained in b.
- <= The value to be assigned is computed immediately, but the assignment itself is delayed until all simultaneously executing code has been evaluated. So, a<=b; b<=a; will swap the values of a and b. However, a<=b; c<= a; will set a to the value of b, and c to the old value of a (the value before the value of b is written to a).
The two kinds of assignment can be confusing, and mixing them in one “always” block is a recipe for disaster. However, things are much easier if you obey the following rules:
1. Inside always_ff @(posedge clk) blocks use <= for everything (except the iteration variable in a for loop).
2. For assign statements and always_comb blocks use = for everything.
3. Avoid complex logic in always_ff @(posedge clk) blocks – instead, compute complex logic in always_comb blocks, and then just have statements like “ps <= ns” in always_ff @(posedge clk) blocks.
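As an illustration of rule 1, here is a sketch (the signal names are made up) of two registers that swap values on every clock cycle. This works only because <= reads all right-hand sides before performing any assignment:

```verilog
// Two registers that swap values on every positive clock edge.
// With blocking (=) assignments, both would end up holding b's old value.
always_ff @(posedge clk) begin
    a <= b;   // reads the old b
    b <= a;   // reads the old a, not the value just scheduled for a
end
```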
**Clocks**
A sequential circuit will need a clock. We can make the test bench provide it with the following code:
```verilog
logic clk;
parameter PERIOD = 100; // period = length of clock
// Make the clock LONG to test
initial begin
    clk <= 0;
    forever #(PERIOD/2) clk <= ~clk;
end
```
This code would be put into the testbench code for your system, and all modules that are sequential will take the clock as an input.
**Declaring Multi-bit Signals**
So far we have seen “logic” statements that create single-bit signals (i.e. they are just 0 or 1). Often you’d like to represent multi-bit values (for example, a 3-bit variable that can represent values 0..7). We can do this type of operation with the following declarations:
```verilog
logic [2:0] foo; // a 3-bit signal (a bus)
logic [15:0] bar; // a 16-bit signal
```
These statements set up a set of individual wires, which can also be treated as a group. For example, the “logic [2:0] foo;” declares a 3-bit signal, which has the MSB (the 2^2’s place) as foo[2], the LSB (the 2^0’s place) as foo[0], and a middle bit of foo[1].
The individual signals can be used just like any other binary value in Verilog. For example, we could do:
```verilog
and a1(foo[2], foo[0], c);
```
This AND’s together c and the 1’s place of foo, and puts the result in the 4’s place of foo.
Multi-bit signals can also be passed together to a module:
```verilog
module random(bus1, bus2);
output logic [31:0] bus1;
input logic [19:0] bus2;
logic c;
another_random ar1(.c, .bus2, .bus1);
endmodule
```
This module connects to two multi-bit signals (32 and 20 bits respectively), and passes both of them to another module “another_random”, which also connects to a single-bit wire c.
**Multi-bit Signals – Common Error**
When you are declaring multi-bit signals, you may get a warning message like:
```
"Warning! Port sizes differ in port connection (port 2) [Verilog-PCDPC]"
```
If so, look for something like the following in your code:
```verilog
input logic [31:0] d, reset, clk;
```
What that line does is declare three 32-bit values. That is, d is [31:0], reset is [31:0], and clk is [31:0].
What you actually want is:
```verilog
input logic [31:0] d;
input logic reset, clk;
```
Which declares d to be a 32-bit value, and reset and clk are 1-bit values.
**Multi-bit Constants**
In test benches and other places, you may want to assign a value to a multi-bit signal. You can do this in several ways, shown in the following code:
```verilog
logic [15:0] test;
initial begin // stimulus
test = 12;
#(10) test = 16'h1f;
#(10) test = 16'b01101;
end
```
The 16-bit variable `test` is assigned three different values. The first is in decimal, and represents twelve. The second is a hexadecimal number (specified by the 'h) 1f, or $16 + 15 = 31$. The last is a binary number (specified by the 'b) 01101 = $1 + 4 + 8 = 13$. In each case the value is assigned, in the equivalent binary, to the variable `test`. Unspecified bits are padded to 0. So, the line:
```verilog
test = 12;
```
is equivalent to:
```verilog
test = 'b0000000000001100;
```
It sets `test[2]` and `test[3]` to 1, and all other bits to 0.
**For Loops for Multi-bit Signals**
Sometimes when we need to reorganize signals in a bus, a FOR loop can be helpful, particularly with mathematical calculations for the indexes.
```verilog
logic [7:0] LEDG;
logic [9:0] LEDR;
integer i;
always_comb begin
    for (i=0; i<8; i=i+1) LEDG[7-i] = GPIO_0[28+i];
    for (i=0; i<10; i=i+1) LEDR[9-i] = GPIO_0[18+i];
end
```
```
In this code we set LEDG[7] = GPIO_0[28], LEDG[6] = GPIO_0[29], etc.
**Multi-Dimensional Buses**
Sometimes it can be useful to have structures with more than one dimension – for example, we might want to hold 16 8-bit values. Verilog allows you to define multiple sets of indexes for a variable:
```verilog
logic [15:0][7:0] str; // 16 8-bit values ("string" is a SystemVerilog keyword, so avoid it as a name)
```
To index a value, you move left-to-right through the indices. For example, the following code sets all the bits of a 4-dimensional bus to 0:
```verilog
logic [15:0][9:0][7:0][3:0] vals;
integer i, j, k, l;
always_comb begin
for(i=0; i <= 15; i++)
for(j=0; j<=9; j++)
for(k=0; k<=7; k++)
for(l=0; l<=3; l++)
vals[i][j][k][l] = 0;
end
```
**Subsets**
Sometimes you want to break apart multi-bit values. We can do that by selecting a subset of a value. For example, if we have
```verilog
logic [31:0] foo;
initial foo[3:1] = 3'b101;
```
This would set foo[3] = 1, foo[2] = 0, and foo[1] = 1. All other bits of foo will not be touched. We could also use the same form to take a subset of a multi-bit wire and pass it as an input to another module.
Note that this subdividing can be done to save you work in creating large, repetitive structures. For example, consider the definition of a simple 16-bit register built from a base D_FF unit:
```verilog
module D_FF16(q, d, clk);
    output logic [15:0] q;
    input logic [15:0] d;
    input logic clk;
    D_FF d0 (.q(q[0]), .d(d[0]), .clk);
    D_FF d1 (.q(q[1]), .d(d[1]), .clk);
    ...
    D_FF d15 (.q(q[15]), .d(d[15]), .clk);
endmodule
```
With the 16 separate D_FF lines there’s a good likelihood you’ll make a mistake somewhere. For a 32-bit register it’s almost guaranteed. We can do it a bit more safely by repeatedly breaking down the problem into pieces. For example, write a 4-bit register, and use it to build the 16-bit register:
```verilog
module D_FF4(q, d, clk);
    output logic [3:0] q;
    input logic [3:0] d;
    input logic clk;
    D_FF d0 (.q(q[0]), .d(d[0]), .clk);
    D_FF d1 (.q(q[1]), .d(d[1]), .clk);
    D_FF d2 (.q(q[2]), .d(d[2]), .clk);
    D_FF d3 (.q(q[3]), .d(d[3]), .clk);
endmodule

module D_FF16(q, d, clk);
    output logic [15:0] q;
    input logic [15:0] d;
    input logic clk;
    D_FF4 d0 (.q(q[3:0]),   .d(d[3:0]),   .clk);
    D_FF4 d1 (.q(q[7:4]),   .d(d[7:4]),   .clk);
    D_FF4 d2 (.q(q[11:8]),  .d(d[11:8]),  .clk);
    D_FF4 d3 (.q(q[15:12]), .d(d[15:12]), .clk);
endmodule
```
**Concatenations**
Sometimes instead of breaking apart a bus into pieces, you instead want to group things together. Anything inside { }'s gets grouped together. For example, if we want to swap the low and high 8 bits of an input to a D_FF16 we could do:
```
logic [15:0] data, result;
D_FF16 d1(.q(result), .d({data[7:0], data[15:8]}), .clk);
```
Pretty much anything can go into the concatenation – constants, subsets, buses, single wires, etc.
**Bit Replication in Concatenations**
Sometimes you would like to copy a bit multiple times in a concatenate (very useful for sign-extension in 2's Complement numbers). You can do that with the following construct:
```
logic [15:0] large;
logic [7:0] small;
assign large = {{8{small[7]}}, small};
```
Here, the top bit of small is copied 8 times, and then the entire 8-bit number follows. This means you will have 9 instances of the top bit, followed by one of each of the remaining bits.
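As a usage sketch (the module name here is made up), the same replication idiom can be packaged as a sign-extender for 2's complement values:

```verilog
// Sign-extend an 8-bit 2's complement value to 16 bits.
module sign_extend (
    input  logic [7:0]  in,
    output logic [15:0] out
);
    assign out = {{8{in[7]}}, in};  // replicate the sign bit 8 times
endmodule
```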
**Enumerations**
For FSMs and the like, we want to have variables that can take on one of multiple named values – while we could just use numbers, names are easier to use. Sometimes we may use PARAMETER statements to set up names for variables, but for FSM state variables enumerations work better. For example, the following code defines the allowable states for an FSM:
```
enum { RED, BLUE, GREEN} ps, ns;
```
This defines two variables, ns and ps, and requires that their values be either RED, GREEN, or BLUE. We can test the value of variables, and assign new values, using those names:
```
always_comb begin
if (ps == RED)
ns = BLUE;
else
    ...
```
Verilog will then assign specific values to each of these variables. If you want to set specific values, you can use:
```verilog
enum { RED=0, BLUE=1, GREEN=2 } ps, ns;
```
Make sure to have one of the values be equal to 0, but the other values can be whatever you want. One last tip – if you want to print the value of an enum variable ps, you can call ps.name to return the string for that value (i.e., if ps is 0, ps.name will return "RED").
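A small sketch of printing an enum’s name in simulation, assuming the ps declaration above:

```verilog
// In a testbench or initial block; some tools also accept ps.name
// without parentheses, as in the text above.
initial $display("current state = %s", ps.name());
```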
**Example Finite State Machine**
Here’s an example of a simple sequential circuit, with all of its gory details. Note that this circuit computes parity – the output is true when the circuit has seen an odd number of trues on its input.
```
// Parity example
module parity (out, in, reset, clk);
output logic out;
input logic in, reset, clk;
logic ps, ns;
always_comb begin
ns = ps ^ in;
end
always_ff @(posedge clk) begin
if (reset)
ps <= 0;
else
ps <= ns;
end
assign out = ps;
endmodule
module parity_testbench;
logic in, reset, clk;
logic out;
parameter period = 100;
parity dut (.in, .reset, .clk, .out);
initial begin
    clk <= 1;
    forever #(period/2) clk <= ~clk;
end
initial begin
    reset <= 1; in <= 0; @(posedge clk);
    reset <= 0;          @(posedge clk);
    in <= 1;             @(posedge clk);
                         @(posedge clk);
    in <= 0;             @(posedge clk);
    in <= 1;             @(posedge clk);
    in <= 0;             @(posedge clk);
                         @(posedge clk);
    $stop();
end
endmodule
```
**Advanced Features – assert statements**
As you design larger systems, you will often have assumptions you’d like to make sure are true. For example, you may have a parameterized module, but there are only a few legal values of that parameter. Or, you may have a module that assumes the inputs obey certain requirements. You could check this via simulation, but as the design gets larger you are more and more likely to miss things.
The solution to this is the “assert” statement, that in simulation will raise an error whenever the value inside the assertion is false. So, if we have a parameter with only a few legal values, we can test it with an assertion inside the module:
```
initial assert(WIDTH>1 && WIDTH<=19);
```
If we require that at least one input to a unit must always be true, we can test it with an always-running assertion:
```
always_ff @(posedge clk) begin
assert(reset || a != 3'b000 || b);
end
```
**Advanced Features – generate statements**
Earlier in this tutorial we showed how to build a 16-bit register by using a hierarchy of modules, one that does a 4-bit register, and another that uses 4 of these 4-bit registers to build a 16-bit register. If we want to make a completely parameterized version, where
the size can be any length at all, we can use a generate statement. Generate allows us to put submodule calls and other logic within “for” loops and “if” statements, allowing the logic to decide the number of modules actually instantiated. Note that any iteration variables must be declared “genvar”, and any for loops or if statements must have a begin – end block with a label (an identifier, such as “eachDff” in the code below).
```verilog
module DFF_VAR #(parameter WIDTH=8) (q, d, clk);
output logic [WIDTH-1:0] q;
input logic [WIDTH-1:0] d;
input logic clk;
initial assert(WIDTH>0);
genvar i;
generate
for(i=0; i<WIDTH; i++) begin : eachDff
D_FF dff (.q(q[i]), .d(d[i]), .clk);
end
endgenerate
endmodule
```
Note that for the case of the register, one could just use a parameter and the FSM format to do the same thing:
```verilog
module DFF_VAR #(parameter WIDTH=8) (q, d, clk);
output logic [WIDTH-1:0] q;
input logic [WIDTH-1:0] d;
input logic clk;
initial assert(WIDTH>0);
always_ff @(posedge clk) begin
q <= d;
end
endmodule
```
But the register is a simple way to show the power of generate statements, which are useful in some situations that cannot easily be handled any other way.
HealthLinks: A ColdFusion Web Application
Brian Westra
SUMMARY. Libraries are beginning to use Web applications as they grapple with sites of increasing complexity, and the move of more user services to the Web. This article reviews the basic concepts of a Web application, and outlines some of the features of the HealthLinks Web application and site <http://healthlinks.washington.edu> at the University of Washington Health Science Libraries, and the transition from a Java-based application to ColdFusion. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <getinfo@haworthpressinc.com> Website: <http://www.HaworthPress.com> © 2002 by The Haworth Press, Inc. All rights reserved.]
KEYWORDS. Web application, ColdFusion, database-driven Web site
WEB APPLICATION OVERVIEW
As much of the business world has embraced the Internet, e-commerce innovations and changing user expectations have prompted libraries to examine their own online services. In this process, librarians and systems staff are confronted with budgetary and staff constraints, the real or anticipated expectations of users regarding Web-enabled services, and a reluctance to embrace and internalize business and development practices and philosophy. However, libraries are clearly about customer service, and any organization interested in improving customer service quality can benefit from examining the successful use of design and technology by excellent service companies.\(^1\) It is therefore worthwhile to note library-oriented articles on portals,\(^{2,3,4}\) database-driven sites,\(^{5,6}\) and the confluence of electronic commerce and digital libraries.\(^7\)

Brian Westra (brian.westra@metrokc.gov) is Head Librarian, Hazardous Waste Management Program, 130 Nickerson Street, Suite 100, Seattle, WA 98109.

The author would like to acknowledge the following members of the HealthLinks team: Debra Ketchell, Deputy Director, Health Sciences Libraries; Leilani St. Anna, Information Management Librarian; Emily Hull, Head of Information Systems; Adam Garrett, Senior Computer Specialist; Casey Hagan, Student Programmer (ColdFusion database maintenance tools); Cliff Olmsted, Student Systems Administrator (Linux and Apache); and Joanne West, Usability Studies and Webtrends Reports.

[Haworth co-indexing entry note]: “HealthLinks: A ColdFusion Web Application.” Westra, Brian. Co-published simultaneously in Internet Reference Services Quarterly (The Haworth Information Press, an imprint of The Haworth Press, Inc.) Vol. 7, No. 1/2, 2002, pp. 63-88; and: Database-Driven Web Sites (ed: Kristin Antelman) The Haworth Information Press, an imprint of The Haworth Press, Inc., 2002, pp. 63-88. Single or multiple copies of this article are available for a fee from The Haworth Document Delivery Service [1-800-HAWORTH, 9:00 a.m. - 5:00 p.m. (EST). E-mail address: getinfo@haworthpressinc.com].

© 2002 by The Haworth Press, Inc. All rights reserved.
In this regard, middleware and Web applications are beginning to see judicious use by some libraries. An exact definition of middleware is difficult to come by, but in a general sense it enables interaction between components (database, e-mail, Web server), and simplifies the programming model for the developer.\(^8\) Web applications are one class of middleware. Web applications can range from “static Web pages, to searchable site/dynamic Web pages, and applications that integrate with operational databases,” including customer-driven Web transactions.\(^9\) In a general sense, a Web application is
> a Web system (Web server, network, HTTP, browser) in which user input (navigation and data input) affects the state of the business. This definition attempts to establish that a Web application is a software system with a business state, and that its front end is in large part delivered via a Web system.\(^10\)
These descriptions exemplify the influence that business practices and applications have had upon the terminology. In an excellent review, Fraternelli regards the Web application as “characterized by a direct business-to-customer relationship.”\(^11\) On a more concrete level, the technology should attempt to meet some or all of the following requirements:
- handling both structured and non-structured data;
- support exploratory access through navigational interfaces;
- high level of graphical quality;
- customization and possibly dynamic adaptation of content structure, navigation primitives, and presentation styles;
- support of proactive behavior (recommendation and filtering).\(^12\)
Fraternelli points out that these requirements may be in conflict with the following technical and administrative objectives:
- security, scalability, and availability;
- interoperability with legacy systems and data;
- ease of evolution and maintenance.¹³
Web application servers will gain greater acceptance by systems staff as they prove capable of minimizing complexity in dealing with multiple platforms and standards, security, and system-level management functions.¹⁴ Fortunately, a number of products are able to support these needs.
Most models of a Web application are based on a three-layer approach: presentation layer (user-friendly interface), business-logic layer (implements the logic and provides services to the presentation layer), and the system layer, which is responsible for data storage and network requirements.¹⁵ Business logic can also be defined as the piece that “links activities, actors, and resources together within and between companies in buyer-seller relationships.”¹⁶ It is relatively easy to extrapolate this definition to the relationship between the library and the patron, e.g., someone using the online circulation systems to renew a book, or setting up a table of contents alerting service for their commonly read journals.
Falk provided some earlier concepts of database-enabled library Web sites or site components that were precursors or have been incorporated into Web applications, such as dynamic page generation, periodical lists, and full-text collections not found in the catalog.¹⁷ These exemplify how Web applications can be used to improve online services to users. Antelman gives an excellent review of the concepts behind library database-driven sites, and the Web management opportunities afforded by Web application servers. Web applications may be a necessary step in building sites of increasing complexity, integrating heterogeneous, distributed resources, while affording some aspects of information management and organization to both library staff and the end user.¹⁸ Application servers such as Macromedia’s ColdFusion (tm) provide Web site integration tools, open database connectivity (ODBC) features, and integrated development environments (IDEs)¹⁹,²⁰ that allow developers to more easily construct Web applications.
HealthLinks OVERVIEW
The HealthLinks site <http://healthlinks.washington.edu/> is a critical resource for information for faculty, staff and students in the Health Sciences at the University of Washington. The site also serves the various medical centers associated with the university, faculty and students in the five-state WWAMI region (Washington, Wyoming, Alaska, Montana, and Idaho), and the public. Ketchell and Hull have provided a good overview of the site’s history, and its utility as a portal for serving the medical clinician.\textsuperscript{21,22} The site provides access to textbooks, journals, databases, and other relevant resources for students and faculty, as well as information for the clinician and researcher. Information is presented in the form of role-based “toolkits” (see Figure 1), and topical pages. Each toolkit is developed with and revised in response to user feedback. The site receives significant usage; a three-month average for autumn 2001 indicates the site receives approximately 16,400 hits per day, or 500,000 per month. Work on the site and the underlying database is a collaborative effort, and includes departmental liaisons, serials and systems staff, and others.
Like most library sites, HealthLinks was strictly “hand-maintained” HTML at its inception in 1994, and remained so until 1998. Under funding from an Integrated Advanced Information Management System (IAIMS) grant from the National Library of Medicine, a database was developed to maintain several thousand links to commercial and locally developed content. The Apache Web server was and continues to be used for static HTML pages and CGI scripts. The move to database content in 1998 was driven by several factors: a desire to reduce the number of hand-maintained HTML pages; to give subject experts direct access to the data and its organization, which could then be extrapolated to the site, rather than working through HTML editing; and to improve search granularity beyond what a full-site search engine could achieve, but which a database approach might offer.²³
At the time of the move from flat text to a database-driven site, approximately 150 out of several thousand Web pages of the site were set up to be generated from a database. These 150 pages changed regularly to remain up-to-date and to reflect changing user needs, and were therefore optimal candidates for dynamic or database-generated content. The Java-based Web application wrote out the 150 pages to the Web server, and provided an interface through which library staff could enter and edit records and relationships in the database, which directly affected the organization and content of the generated pages on the site. Distributed access to the database maintenance tools was important, since a variety of library staff do data entry and may work from home as well as the office. While this type of Web application is perhaps not as dynamic or synchronous as a shopping cart application for an e-commerce site, it provides a foundation for understanding the site in relation to current terminology. It is worth noting that library staff are also users of the site, as they use the public site and the database maintenance tools to provide instruction and information services and to maintain electronic collections and serial subscription information.
The initial Web application was built with JavaServer Pages utilities and Java servlets to generate the Web pages and provide an interface for the data tools. Page generation was accomplished by combining information from database queries with HTML tags into text files that were then written out to the Web server. The database tools allowed staff to edit records and establish relationships in the database, which were used for these page generation queries.
The application utilized the IBM WebSphere application server and a SQL Server 6.5 database with approximately 5,000 resource records. For the purposes of the HealthLinks database, a “resource” is a unique record, typically pointing to a Web site or specific online source, with information stored in a mixture of Dublin Core metadata and locally produced fields, including URL, title, and associated keywords (see Figure 2). These resources can be grouped together within categories (and subcategories), which are then associated with one or more topics. The basic page of the site that is generated from the database has as its foundation a topic, and from zero to several categories with their respective resources. For instance, in Figure 1, the topic is “Student Care Provider Toolkit,” and the categories are “MEDLINE and Full-Text Journals,” “Drug Reference,” “Key Resources,” etc. The resources are the links under each category.
A Java programmer/project administrator and student programmers developed the database and Web application for the site, and it went through iterative changes during the next two years. While the site was successful by many metrics, the Java-based Web application required a programmer to make any changes in the database maintenance components or in the code that generated the HTML pages. In addition, some of the search capabilities that were an original goal of the transition remained unrealized, though Java could have been used for this feature.

**FIGURE 2.** Dublin Core and UW local fields used in HealthLinks resource records.

<table>
<thead>
<tr>
<th>Dublin Core Fields</th>
<th>UW Local Fields</th>
</tr>
</thead>
<tbody>
<tr>
<td>Title</td>
<td>Topic</td>
</tr>
<tr>
<td>Resource Identifier</td>
<td>Category</td>
</tr>
<tr>
<td>Subject/Keywords</td>
<td>Subcategory</td>
</tr>
<tr>
<td>Format</td>
<td>Access Restriction</td>
</tr>
<tr>
<td>Language</td>
<td>UW Resource</td>
</tr>
<tr>
<td>Author</td>
<td>Class Number</td>
</tr>
<tr>
<td>Rights Management</td>
<td>Bookplate</td>
</tr>
<tr>
<td>Description</td>
<td>Help</td>
</tr>
<tr>
<td>Publisher</td>
<td>Notes</td>
</tr>
<tr>
<td>Other Contributor</td>
<td>Unique ID</td>
</tr>
<tr>
<td>Date Published</td>
<td>Record-Created-Date</td>
</tr>
<tr>
<td>Resource Type</td>
<td>Record-Last-Modified-Date</td>
</tr>
<tr>
<td>Source</td>
<td>Record-Created-By</td>
</tr>
<tr>
<td>Relation</td>
<td></td>
</tr>
<tr>
<td>Coverage</td>
<td></td>
</tr>
</tbody>
</table>
**CONVERSION PROJECT**
In 2000 it was decided to convert the entire Web application from Java to ColdFusion for the next iteration of the HealthLinks site. This decision was based on several factors, including: the flexibility afforded by ColdFusion’s rapid development environment; relatively low ongoing development cost compared to hiring a Java programmer; a more intuitive approach to the code for non-technical staff; utility on both Windows and Linux platforms; and the adoption of ColdFusion for Web applications by several other health sciences libraries.
The conversion from Java to ColdFusion focused on three components: (1) static page generation; (2) database maintenance tools; and (3) new dynamic pages and search features as they became possible. Several constraints were placed on this conversion phase. First, the database schema could not be altered, since the Java-based database maintenance tools would continue to be used while the site generation tools were being replaced by ColdFusion templates. Second, while the public site could not be substantially changed in its overall design, changes to improve workflow in the database maintenance tools and new search features for the public site were expected. The new Web application enabled the HealthLinks team to contemplate other components and search features for the site as the project progressed. At a recent count, the number of generated pages had risen to 294, and there were 3,667 hand-maintained HTML files in the production site.
It is good practice to approach a Web application project from the perspective of a development cycle, even if the project itself will be based on rapid development and prototyping. The development cycle emphasizes requirement gathering, modeling, and prototyping, and is intended to help the development team clarify and meet checkpoints throughout the process and avoid project “creep.” Requirements throughout the project were based on the same site generation capabilities, database access, and workflows that were provided by the Java application. Once the database schema and queries had been established, a “proof of concept” group of templates was developed that could generate the same pages as the existing application. Following this, the remaining page generation templates were developed and implemented. The final steps involved building new data entry tools, and this process was more iterative in nature, since workflow improvements could be realized along the way with input from staff and librarians.
**ColdFusion**
ColdFusion Web application server is produced by Macromedia and, when coupled with the Studio editor, provides a rapid development environment in which coding and scripting, database connectivity and queries, and HTML tags can be quickly combined within a single editor that is easy to personalize and reconfigure (see Figure 3). ColdFusion allows the developer to address issues related to multiple platforms and standards, security, and system management on several levels: in the code that constructs the application, and through configurations in the server administration console.

**FIGURE 3.** ColdFusion Studio provides an integrated development environment for editing CFML and HTML, and for accessing the database tables to quickly build queries.
ColdFusion employs a tag-based approach to programming, in which functionality is encapsulated within tags rather than explicitly scripted, and it offers some familiarity to those acquainted with HTML, as the name ColdFusion Markup Language (CFML) implies. Some understanding of relational databases and structured query language (SQL), and an affinity for basic programming concepts, will be useful for those new to application development. Advantages of ColdFusion are the ease and speed of application development due to its tag-based coding features, cross-platform capabilities, and a large user base and support forum. Libraries use ColdFusion for various applications. Developers are not limited to the present group of CFML tags, since user-defined functions and custom tags can be created for specific purposes, and there is a library of custom tags on the Macromedia site. It is generally easier and faster to create functional applications with ColdFusion than with more complex Web application platforms, such as Active Server Pages (ASP) and PHP, though there are plenty of library applications built with those tools. Ultimately, the choice of an application server is highly dependent on considerations that should include staff skills, training, financial resources, preexisting software and hardware, and a commitment from administration and systems staff to support the choice.
The basic setup for a ColdFusion application calls for a Web server (UNIX/Linux or Windows), ColdFusion, and an ODBC-accessible database, on the same machine or separate ones. A typical dynamic page generation occurs in the following way. When the Web server receives a request for a page that calls for CFML processing (.cfm or .cfml), it routes the request to the ColdFusion application server. The ColdFusion files (“templates”) are written in CFML and reside on the ColdFusion/Web server. When they are executed, they may run queries against the database or do other processing, then combine this information with HTML tagging and output the resulting file(s) to the client via the Web server.
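The flow just described can be sketched as a minimal template. This is an illustrative sketch only: the datasource, table, and column names below are assumptions, not taken from the HealthLinks schema.

```cfml
<!--- topic.cfm: hypothetical template; datasource and schema names are illustrative --->
<cfquery name="getResources" datasource="healthlinks">
    SELECT Title, URL
    FROM Resource
    WHERE TopicID = <cfqueryparam value="#url.topicID#" cfsqltype="CF_SQL_INTEGER">
    ORDER BY Title
</cfquery>
<html>
<head><title>Resources</title></head>
<body>
    <ul>
    <!--- cfoutput with a query attribute loops once per row --->
    <cfoutput query="getResources">
        <li><a href="#URL#">#Title#</a></li>
    </cfoutput>
    </ul>
</body>
</html>
```

A request for topic.cfm?topicID=12 would run the query and return the assembled HTML to the browser via the Web server.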
**Documentation**
Many Web site developers fail to provide formal documentation, due to the iterative, user-centered approach that is taken with smaller and medium-size sites. This can be true of any development process, and unfortunately, the database and Java-based tools lacked much of the information that would have eased the conversion. Therefore, another explicitly stated goal of the project was to develop clear documentation. The resulting ColdFusion templates contain numerous CFML comments about specific sections of code, and there are separate documents providing overviews of the file generation templates, the database, the database maintenance tools, and the hardware and software configurations.
**Database**
Because the database and Java code lacked documentation, anecdotal information from the HealthLinks team and examination of the Java code were used to determine how the database schema evolved, and what role the various tables played. Several tables were found to be residuals from older revisions of the database design, and establishing their true role proved to be time-consuming. This led to an incremental process for the development of the initial page generation templates, until all relevant tables and relationships were defined.
The current database schema is shown in Figure 4. The most basic content component of the HealthLinks database is the resource record. The database diagram shows how resources are related to keywords and other descriptors of the resource. Resources are then related to specific Topic, Topic/Category, or Topic/Category/Subcategory groups, through the Include and Related tables. Further, most pages are generated from the database at the Topic level; that is, most of the pages have a single topic and zero to many categories.
A number of stored procedures are used in the Web application. Stored procedures allow a query to be optimized and stored on the database, rather than in the CFML. This enables the CFML code to be somewhat independent of the database schema, and in databases of larger size and complexity, stored procedures may provide some advantages in overall processing speed for the Web application. As part of the overall project, naming conventions were agreed upon for the stored procedures, and other components within the CFML such as query and variable names.
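A stored procedure call from CFML might look like the following sketch. The procedure, parameter, and datasource names here are hypothetical, not the actual HealthLinks conventions.

```cfml
<!--- Invoke a stored procedure kept on SQL Server; names are illustrative --->
<cfstoredproc procedure="usp_GetTopicResources" datasource="healthlinks">
    <cfprocparam type="in" cfsqltype="CF_SQL_INTEGER" value="#url.topicID#">
    <cfprocresult name="topicResources">
</cfstoredproc>

<cfoutput query="topicResources">
    <li><a href="#URL#">#Title#</a></li>
</cfoutput>
```

Because the SQL lives in the procedure, a schema change that preserves the result columns needs no change to the template.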
**Site Architecture/Hardware**
The ColdFusion Web application was developed in a separate environment from the live site and production database. A copy of the database was exported to a server running Windows NT (later Windows 2000), ColdFusion Web application server and Studio, the Apache Web server (later Microsoft Internet Information Services 5.0), and Microsoft SQL Server 7.0. In time, a full development architecture was set up to mirror the anticipated production site. Ideally, production and development architectures should mirror each other, for purposes of working through configuration and site architecture issues, and for load testing templates before moving them to the production server. In reality, suitable results for smaller sites with simple database back-ends can often be achieved with single servers for development and production. Some libraries have used file-based databases such as Microsoft Access, but larger and more complex databases and heavy Web loads usually require a client-server database, and segregating the database server from the ColdFusion/Web server can improve performance.
Several older servers that were to be phased out were “recycled” into use for the development site, which was ultimately composed of an Apache Web server running on Linux and two Windows 2000 servers, one running Internet Information Services (IIS) 5.0 and ColdFusion 4.5, the other devoted to SQL Server 7.0. The production site, which came online in April 2001, has four Dell 2450 servers, with dual 733 MHz processors, 1 GB RAM, and 256 KB cache. As with most dynamic sites, the greater the RAM, the better the capacity to cache pages and query results, and therefore to handle higher traffic loads. Two servers run Red Hat 7.1 and Apache, with failover between them, for the static HTML and CGI scripts of the HealthLinks site (see Figure 5). Another server runs Windows 2000 and is devoted to ColdFusion/IIS, where the Web application server and Web server run. The database (SQL Server 7.0) resides on the fourth server, also on Windows 2000.
The templates reside on the ColdFusion/IIS production and development servers. Several of the templates call stored procedures on the database, or directly query the database tables, and write out static HTML pages to the Linux/Apache machines. Other templates query the database and populate Verity indexes (called collections) for use with the Verity 97 search engine that comes bundled with ColdFusion. These collections provide full-text, Boolean searches, and are run via ColdFusion templates as well.
**Page Generation, Verity Indexing, and Logs**
Basic queries of the database were tested to determine the correct relationships and optimal SQL statements, and these were later incorporated into the database as stored procedures. By combining the “order by” statements in the queries with nested output, various configurations of the resulting pages can be achieved.
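Nested output of this kind relies on the group attribute of cfoutput, which requires the query to be sorted on the grouping column. A sketch, with illustrative table, join, and column names that are not taken from the actual schema:

```cfml
<!--- Illustrative query: categories and their resources for one topic --->
<cfquery name="getTopicPage" datasource="healthlinks">
    SELECT c.CategoryName, r.Title, r.URL
    FROM Category c
    INNER JOIN Include i ON i.CategoryID = c.CategoryID
    INNER JOIN Resource r ON r.ResourceID = i.ResourceID
    WHERE c.TopicID = <cfqueryparam value="#url.topicID#" cfsqltype="CF_SQL_INTEGER">
    ORDER BY c.CategoryName, r.Title
</cfquery>

<!--- Outer loop runs once per category, inner loop once per resource --->
<cfoutput query="getTopicPage" group="CategoryName">
    <h2>#CategoryName#</h2>
    <ul>
    <cfoutput><li><a href="#URL#">#Title#</a></li></cfoutput>
    </ul>
</cfoutput>
```

Changing the ORDER BY columns, together with the grouping, is what yields the different page configurations described above.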
Four templates have been created to write out static HTML files to the Apache Web server and ColdFusion server for the public HealthLinks site. These templates cover four types of pages, each of which utilizes a different query of the database. The templates generate topic/category pages, e-journal pages for browsing by journal title, topic/category/subcategory pages (Molecular Biologist toolkit), and statistics resources pages for browsing by topic. These pages may change from day to day, but the information is generally not so fluid as to require “on-the-fly” dynamic generation. In addition, the production and use of these static pages provides information that is up-to-date for the user, while reducing the overall load on the application server and database that dynamic pages would require.
HealthLinks makes use of server-side includes (SSIs) to produce the navigational structures on the top, bottom, and side of each page. Apache can be enabled to include SSIs, but a Web server can employ only one type of server-side processing on a given page. Since ColdFusion is itself a type of server-side processing, ColdFusion templates cannot use the SSI method; instead they use the CFML equivalent, the <cfinclude> tag, which can point to a file with an .ssi, .cfm, or other extension. This allows the same SSIs to be copied across both Web servers for equivalent content and function in the static and dynamic pages, no matter what the server. In a limited number of cases, certain SSIs for the static and dynamic pages are generated on a daily basis, where the content of the SSI might change from day to day. For instance, the e-journal search box has an alphabetical title browse list, which may change as subscriptions are added or dropped, so this SSI is generated from the database and written out to both the Apache Web server (for static e-journal pages) and the ColdFusion server (for e-journal title search results pages).
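On the ColdFusion server, a template pulls in the same shared navigation fragments with cfinclude rather than SSI directives. The file names below are illustrative.

```cfml
<!--- The same fragment files served by Apache as SSIs are included here via CFML --->
<cfinclude template="includes/header.ssi">
<!--- ...page body generated from the database goes here... --->
<cfinclude template="includes/footer.ssi">
```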
The ColdFusion templates call the stored procedures as specified in their code and use the results to write out HTML files to the appropriate directories on the Apache Web server. The page generation templates are scheduled to run once each day, overwriting the previous day’s files. When these scheduled templates run, they also append information to a log file to record template name, query results, and which HTML files were generated.
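Writing a generated page and appending to the log can be done with cffile. The paths, file names, and variable names below are assumptions for the sake of illustration.

```cfml
<!--- Write the assembled page; pageHTML is assumed to hold the generated markup --->
<cffile action="write"
        file="\\webserver\healthlinks\pediatrics.html"
        output="#pageHTML#">

<!--- Append a line to the generation log: timestamp, template, file, row count --->
<cffile action="append"
        file="c:\logs\pagegen.log"
        output="#Now()# topic.cfm wrote pediatrics.html (#getTopicPage.RecordCount# resources)">
```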
**Search Features**
The HealthLinks site is popular, and the Verity search engine that comes bundled with ColdFusion Web application server has proven to be a scalable approach to searches that meets user needs. In the Java-based Web application, the only search option on the site was a full-site Web search, via a Webinator/Thunderstone search engine. Search results could have been configured to a greater degree, but users found the search results display confusing, and expressed a desire for a more targeted search that would yield only the resources from the HealthLinks database, rather than whole Web pages.
Web site searches are now carried out via an Inktomi/Ultraseek search engine, which is run by the University Libraries. ColdFusion templates and the Verity 97 search engine provide the other search options on the site. Any search request that originates on the HealthLinks site, including an Inktomi search, has its search terms recorded by a ColdFusion template before being processed. Hit counts are also recorded for all the Verity searches. This data will enable us to analyze the types of searches (journals vs. other resources), phrase vs. single term searches, and the relationship between search requests and navigational structures and terminology.
The primary search feature is the site-wide resources search, which searches titles and descriptions of resources in the HealthLinks database. The search form is located at the top of all HealthLinks pages, replacing the Webinator search, and therefore has become the default choice. The precise location of the search form and associated text was determined based on a small usability test. There were approximately 18,000 searches per month from September through October 2001; 12,000 were resource searches, 670 were Web site searches, and 5,000 were e-journal title searches.
The e-journal search is a new feature that accompanied the move to ColdFusion. It is accomplished via a Verity collection created by a query of the resource titles related to the e-journal topic. When a search is run against this collection, a list of resource ID numbers for matching e-journal titles is returned to the template. The ColdFusion template then runs a query of the HealthLinks database to return the related information for each resource ID. This information is put into alphabetical order and served to the client in an HTML file via the IIS Web server.
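The two-step search just described can be sketched as follows; the collection, datasource, and column names are hypothetical.

```cfml
<!--- Step 1: search the Verity collection; hits.key is assumed to hold the
      resource IDs stored when the collection was indexed --->
<cfsearch name="hits" collection="ejournals" criteria="#form.searchTerm#">

<cfif hits.RecordCount GT 0>
    <!--- Step 2: pull the current record details from the database --->
    <cfquery name="getJournals" datasource="healthlinks">
        SELECT Title, URL
        FROM Resource
        WHERE ResourceID IN (#ValueList(hits.key)#)
        ORDER BY Title
    </cfquery>
</cfif>
```

Keeping only IDs in the index and fetching details from the database means the displayed information is always current, even between re-indexing runs.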
The resource search is also conducted against a Verity collection. This collection or index is built from a query that returns the titles and keywords from all “unsuppressed” resource records. Resources can be added to the database, but blocked from inclusion in Verity collections or site generation by means of a checkbox on the Resource data entry form in the database maintenance tools.
Another recently added search is an index of approximately 250 resource records for print and online statistical sources. This search employs a collection of titles, descriptions, and keywords created from a query of only those resources that contain the statistics keyword, which indicates that a resource is primarily statistical in nature. The same keyword is also used to generate approximately 130 static pages for browsing by statistical topic.
**Load Testing**
Load testing is a valuable tool, but many libraries and smaller enterprises do not account for this in their development cycle. A primitive form of load testing was carried out by means of a ColdFusion template that could cycle at a specified time interval. The template ran a basic query of the database that employed relationships between several tables, and therefore provided a good approximation of a typical query. Multiple instances of the template were run, and parameters of the IIS server were logged for analysis. This testing indicated that the application as designed was scalable beyond the anticipated number of hits per minute.
**Scheduling Template Activity**
Templates for site generation and Verity collection indexing were scheduled to run daily, by means of the ColdFusion Administrator console. These templates and the database maintenance tools are run in a virtual directory with a different port number to isolate them from the search templates for scheduling purposes, and to allow their activity to be logged in a different IIS log than the search template activity. Because the templates are resource-intensive, they were scheduled to run at a time when public use of the site was lowest. However, it was later found that search engine spiders or robots indexing the site would hit all of the information links on a given page within the space of several seconds. Each of those links runs a query and dynamically generates a page. If this occurred at the same time as the CPU-intensive site generation templates were scheduled to run, the Web application server could slow down or lock up. Rescheduling the site generation templates countered this problem. Another possible solution is to generate static pages for all of the information links, rather than pulling them from a dynamic query, or to move more of the information for that part of the site to a database view, which would require less of the database server’s resources. Timeout for the templates in the Scheduler is independent of the server-wide timeout setting (which was set to 10 seconds), and some of the scheduled templates take up to 25 seconds to run.
**Database Maintenance Tools**
**Browser Compatibility.** Because some of the templates in the database maintenance tools make use of JavaScript, it was decided to design this part of the Web application to use Internet Explorer (IE) 5.x, so that template development did not have to follow parallel paths to accommodate differences in how JavaScript is handled by Netscape and IE. This choice is possible since there is a limited population of librarians and staff working with the database tools, and they have ready access to IE on their desktop machines. Since IE was chosen, it was also decided to employ Cascading Style Sheets (CSS) for much of the display features, as CSS enables the developer to quickly and easily alter fonts, colors, and other Web page attributes, and IE does a good job of displaying standards-conformant CSS.
**Pubcookie/UWNetID Authentication.** Distributed secure access was enabled via secure sockets layer (SSL) with a Thawte certificate, and the Pubcookie/University of Washington network ID (UWNetID) authentication system. Pubcookie is a centralized authentication system developed by the University of Washington Computing & Communications Department. It is composed of software installed on the server, and the Weblogin server administered by the University. This software is available for Apache and IIS servers. The two components together enable a server administrator to authenticate user access to a particular Web directory by their UWNetID and password, and to set a timeout for this authentication. Upon authentication, a cookie is set. A ColdFusion custom tag was developed which tests the cookie value against a list of authorized UWNetIDs for that particular Web directory. Authorized users are allowed to use the application, while unauthorized users are bounced out to a different directory with an appropriate message. This method is simple, and while it has its limitations, it is adequate to the needs at this time.
Every time a user accesses a file in the database maintenance tools, Pubcookie checks for authentication via the cookie that is saved on the user’s machine after the initial login. If the user is authenticated, the next step is that the custom tag is called from within that directory’s Application.cfm template. The tag has several attributes that are passed to it in the call, including a list of allowed UWNetIDs, which are compared against the Pubcookie cookie value, and a message to users that are not in the list of authorized users.
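The authorization check might look like the following sketch. The tag name, attribute names, and cookie name here are assumptions, not the actual implementation.

```cfml
<!--- In Application.cfm: call the custom tag with the allowed list --->
<cf_checkauth allowedUsers="jsmith,mlee"
              denyMessage="You are not authorized to use these tools.">

<!--- checkauth.cfm (custom tag sketch): compare the Pubcookie value
      to the comma-separated list of authorized UWNetIDs --->
<cfif NOT ListFindNoCase(attributes.allowedUsers, cookie.pubcookie_user)>
    <cflocation url="/unauthorized.cfm?msg=#URLEncodedFormat(attributes.denyMessage)#">
</cfif>
```

Because the allowed list is passed as a tag attribute, each directory's Application.cfm can authorize a different set of users with the same tag.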
**Tools Overview.** The database maintenance tools are a collection of ColdFusion templates aiding staff and librarians in the entry and organization of data in a SQL Server database. The tools integrate the following features: improvements in resource lookup; data validation rules; context-sensitive user authorization; user timeout; and automatic recording of the user performing record maintenance.
Components of the data entry tools are the ColdFusion templates, SQL Server database, and the Pubcookie software. As with the page generation templates, stored procedures on the database are used throughout this part of the application for increased performance and to isolate some of the database design/schema from the ColdFusion code. Another feature of ColdFusion, transaction blocks, is used for all activities that modify data in the database, whereby a series of queries is either enacted or completely rolled back if it is not completed. This protects data integrity across related tables. Custom tags and server side includes are employed to modularize the code and facilitate code reuse.
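A transaction block wraps related modifications so they succeed or fail together. The table and field names in this sketch are illustrative.

```cfml
<!--- Delete a resource and its relationships atomically; if either query
      fails, ColdFusion rolls back the whole block --->
<cftransaction>
    <cfquery datasource="healthlinks">
        DELETE FROM Include
        WHERE ResourceID = <cfqueryparam value="#form.resourceID#" cfsqltype="CF_SQL_INTEGER">
    </cfquery>
    <cfquery datasource="healthlinks">
        DELETE FROM Resource
        WHERE ResourceID = <cfqueryparam value="#form.resourceID#" cfsqltype="CF_SQL_INTEGER">
    </cfquery>
</cftransaction>
```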
The database maintenance tools are not publicly accessible, but several screenshots and descriptions may illustrate some of the more pertinent features. Authorized users can modify resource records, topics, categories, subcategories, relationships between the parts of this hierarchy, and the name and location of the HTML file to be created for a particular topic. The user can also view lists of some of these records, such as the keywords and “orphan” resources that are not currently related to a particular topic but may be important to include for the public site’s resource search.
A problem with the Java application became evident as the tables were reviewed. Several tables contained duplicate records, indicating that the business logic had not included sufficient data validation rules for all tables. Such rules were incorporated into the new application, so that duplicates could not be entered. If a term that a staff person is attempting to add already exists, the user is informed that it is already in the database. Otherwise, the term is added, and the user is informed once the update is completed.
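The duplicate check can be a simple existence query before the insert. Table, column, and form field names here are assumptions.

```cfml
<!--- Does the term already exist? --->
<cfquery name="checkDup" datasource="healthlinks">
    SELECT KeywordID FROM Keyword
    WHERE KeywordText = <cfqueryparam value="#form.newKeyword#" cfsqltype="CF_SQL_VARCHAR">
</cfquery>

<cfif checkDup.RecordCount GT 0>
    <cfoutput><p>"#form.newKeyword#" is already in the database.</p></cfoutput>
<cfelse>
    <cfquery datasource="healthlinks">
        INSERT INTO Keyword (KeywordText)
        VALUES (<cfqueryparam value="#form.newKeyword#" cfsqltype="CF_SQL_VARCHAR">)
    </cfquery>
    <cfoutput><p>"#form.newKeyword#" has been added.</p></cfoutput>
</cfif>
```

A unique constraint on the column at the database level would back up this check against concurrent submissions.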
**Editing Resources.** The lookup template is the heart of the resource data entry templates, and was revised with input from staff to improve the workflow over the previous search and input forms. All fields are labeled, and required fields have red labels. The only restriction other than a required field is that the URL must be a unique value among resource records in the database. After clicking on the Save button, the fields are parsed and saved into the appropriate tables in the database. The template also checks to see if the resource record in the database has changed since it was delivered for editing.
If the validation process finds that the data has indeed been changed, it redirects the user to a page that displays the original information along with the information submitted by the user. In addition, for those fields where another user has revised the data, that information is shown. If the user wants to go ahead with the update, the template parses the new information again, but the check for changed data is not initiated, and anything submitted overwrites the existing record in the database.
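The changed-record check amounts to comparing the record's last-modified timestamp against the one delivered with the edit form. The column, template, and form field names in this sketch are illustrative.

```cfml
<!--- Re-read the timestamp for the record being saved --->
<cfquery name="checkStamp" datasource="healthlinks">
    SELECT RecordLastModifiedDate
    FROM Resource
    WHERE ResourceID = <cfqueryparam value="#form.resourceID#" cfsqltype="CF_SQL_INTEGER">
</cfquery>

<cfif DateCompare(checkStamp.RecordLastModifiedDate, form.originalModifiedDate) NEQ 0>
    <!--- Another user has saved changes; show both versions for review --->
    <cfinclude template="resource_conflict.cfm">
<cfelse>
    <!--- Safe to save: update the record and bump the timestamp --->
    <cfquery datasource="healthlinks">
        UPDATE Resource
        SET Title = <cfqueryparam value="#form.title#" cfsqltype="CF_SQL_VARCHAR">,
            RecordLastModifiedDate = <cfqueryparam value="#Now()#" cfsqltype="CF_SQL_TIMESTAMP">
        WHERE ResourceID = <cfqueryparam value="#form.resourceID#" cfsqltype="CF_SQL_INTEGER">
    </cfquery>
</cfif>
```

This is optimistic concurrency: no locks are held while the user edits, and conflicts are detected only at save time.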
**Relations Tree.** One of the more complex display features is the relations tree form. The CFTree tag was used for the relations tree form, because it replicated what was found in the Java Web application, and its Java applet provided the necessary display and data manipulation functions (see Figure 6). In the right window is a CFTree containing all relations in the database that are available to be associated with the resource record being edited. The CFTree applet shows topics (folders), which can be expanded to show existing category components and their subcategories, if they exist. On the left are the current relations associated with the resource, again showing Topic, Category, and Subcategory. When a relationship is added or deleted, the windows update immediately, so the user can see which topics, etc., the resource is related to.
**FIGURE 6.** Relations Tree form, showing the relation options, and those that have been chosen for resource number 2222, Gene Tests. In this case, the resource has been associated with the Key Resources category, in the Pediatrics topic, among others (see the window on the left).
**Universe, Topic, Category, Subcategory, and Relationship Tools.** Individual universes, topics, categories, and subcategories can be added, deleted, or revised by means of a single group of templates. The universe table and relationships are a carryover from a previous version of the database. HealthLinks is currently the only universe, but this level in the hierarchy may be used at a later time. Code reuse for this group of data tools is made possible by custom tags that perform different functions based on the information that is passed into them. For instance, choosing to work with topics passes the relevant table and query information into a common template that builds a form for universe, topic, category, or subcategory data entry. Figure 7 shows the topic data entry form, but the same template builds the forms for editing universe, category, and subcategory records.
Whether the user is adding, deleting, or revising a record, a confirmation message is presented as part of the process, and in cases where the record is related to others, the user is given an opportunity to accept or decline the action before it takes place.
**Relationships, Page Mapping, and Viewing Generated Pages.** Several other forms complete the database maintenance tools. After the Topic, Category, and Subcategory terms have been entered into the database, they can be related to each other as a hierarchy (see Figure 8). Once these relationships are established, they will appear in the Relations Tree for the resources, and staff can then relate a selected resource to the particular Topic/Category (and Subcategory) combination.
Pages to be generated are given a specific file name and path in the Page Mapping tool, so they can be written to the Web server (see Figure 9). This information is stored in the File table in the database. Any of the pages to be generated can be previewed within the database tools so that staff can view it before it is generated as part of the production site.
CONCLUSIONS
The conversion to ColdFusion Web application server was successful, and has enabled the HealthLinks team to develop features on the site that had previously been long-term goals. Through their participation, non-developer team members also achieved a better understanding of the database schema, limitations and capabilities of a Web application, and the new opportunities that might be realized in a rapid development environment.
Documentation for any technical project is an issue that must be addressed in an ongoing manner. Lack of information can cause critical delays, particularly in environments where institutional memory is eroded by staff turnover, and this proved to be the case with the transition from the previous HealthLinks Web application. Commenting code and writing documentation are tasks that most developers would rather avoid, as they are time consuming and show little immediate impact on the development process. It can also be difficult for an overworked manager to put together a rigorous review of the documentation in preparation for the worst-case scenario. However, project managers should give this component due emphasis throughout the development cycle.
A rapid development environment, such as afforded by ColdFusion, is valuable to the organization if the developer or team can avoid being caught up in a continuous prototype-test-prototype-test cycle. Clear requirements and checkpoints aided in this regard, as did a commitment by project managers to stay on task. An issue for some libraries may be the initial purchase cost for proprietary solutions such as ColdFusion. It is also important to remember the costs of development and long-term maintenance for any given approach, including the time to production, training, and staff and infrastructure requirements.
As has been noted, the resource search in the HealthLinks database became a reality with the new Web application, and it is extremely popular. Approximately 12,000 searches per month in the autumn of 2001 were resource searches. The prominent placement of the revised search form on all HealthLinks pages, and the emphasis on the utility of this new search feature in library instruction, account to some degree for its popularity. The usability testing associated with the resource and Web searches showed that users are still confused about the differences between these two searches. Only about 670 searches per month were Web site searches, and this option is not as visible as the resource search. It is difficult to find terminology and a search interface that will clearly differentiate these two types of searches, and most users will simply change search syntax or terms rather than investigate help files or search suggestions.
Other searches recently added to the site include the e-journal title search and statistics resources search. E-journal title searches accounted for another 5,000 searches per month. Logging of search terms and type of search will enable the HealthLinks team to more quickly analyze search strategies and their relationship to terminology, navigation structures, and organizational features. Dynamic page generation also allows staff to include links to URLs with embedded queries for inclusion of “on-the-fly” dynamic content, which previously was not possible.
Custom tags and server side includes are employed to modularize the code to some degree, though more could be done in this area, particularly if a Fusebox coding methodology was followed for the entire application. This approach enables abstraction of the code and code reuse, and may enable future staff to quickly ascertain the functions of each template. However, this approach tends to take longer at the outset, and requires a higher level of programming in the development stage.
Future projects for HealthLinks may include improvements in the indexing terminology, and a refined resource keyword list would lead to improvements in searching. Changes to the database schema may be considered at a later time, as will overall site design modifications. These issues are being investigated by means of other grant-funded projects, which can provide flexible, “test-bed” approaches to the use of the HealthLinks database for applications focused on more specific needs of clinicians.
Latent semantic analysis of game models using LSTMs
Ghica, Dan R.; Alyahya, Khulood
DOI:
10.1016/j.jlamp.2019.04.003
License:
Creative Commons: Attribution-NonCommercial-NoDerivs (CC BY-NC-ND)
Document Version
Peer reviewed version
Latent semantic analysis of game models using LSTM
Dan R. Ghica*
University of Birmingham, UK
Khulood Alyahya*
University of Exeter, UK
King Saud University, Riyadh, KSA
*Corresponding author
Preprint submitted to Elsevier
April 7, 2019
Abstract
We are proposing a method for identifying whether the observed behaviour of a function at an interface is consistent with the typical behaviour of a particular programming language. This is a challenging problem with significant potential applications such as in security (intrusion detection) or compiler optimisation (profiling). To represent behaviour we use game semantics, a powerful method of semantic analysis for programming languages. It gives mathematically accurate models (‘fully abstract’) for a wide variety of programming languages. Game-semantic models are combinatorial characterisations of all possible interactions between a term and its syntactic context. Because such interactions can be concretely represented as sets of sequences, it is possible to ask whether they can be learned from examples. Concretely, we are using LSTM, a technique which proved effective in learning natural languages for automatic translation and text synthesis, to learn game-semantic models of sequential and concurrent versions of Idealised Algol (IA), which are algorithmically complex yet can be concisely described. We will measure how accurate the learned models are as a function of the degree of the term and the number of free variables involved. Finally, we will show how to use the learned model to perform latent semantic analysis between concurrent and sequential Idealised Algol.
1. Programming languages and machine learning
Software systems often consist of many components interacting via APIs, which can be internal to the language (e.g., libraries, modules) or external (e.g., Web APIs). The final process in producing such a system is usually the “linking” of several object files into one (or several) binary executable(s). Since the linker does not have access to the source files it is a reasonable, and very difficult, question to ask whether the object code in those files originates from an assumed programming language via correct compilation. This is an important question to ask in many contexts: compiler correctness, compiler optimisation, tamper-proofing, intrusion detection, and more. In this paper we propose a simple black-box approach to answering this question, based on game semantics and machine learning.
Programming language semantics, the way we ascribe meaning to programming languages, comes in different flavours. There is the operational approach, which consists of a collection of effective syntactic transformations that describes the execution of the program in a machine-independent way (see [39] for a tutorial introduction). There is also the denotational approach, in which subprograms (terms) are interpreted, compositionally on syntax, as objects in a mathematical semantic domain (see [44] for an introduction). The two approaches are complementary, and both have been studied extensively. Most commonly, especially for most ‘real life’ programming languages, there is another, ad hoc, approach of specifying a language, through a compiler, often informally described in a ‘standard’.
Relating operational and denotational models is a mathematically difficult but worthwhile endeavour. Term equality is operationally defined in a way which is almost unworkable in practice: contextual equivalence. By contrast, term equality in the denotational model is just equality of the denoted mathematical objects. As a result, denotational models are presumably handy in
applications where equality of terms is important, such as compiler optimisations. When contextual equivalence coincides with semantic equality the model is said to be fully abstract, a gold standard of precision for a denotational model. Constructing fully abstract denotational models even for relatively simple higher order (PCF [42]) or procedural (Algol [36]) languages turned out to be a difficult problem, extensively studied in the 1990s. Many interesting semantic developments emerged out of this concerted effort, including game semantics, a technique which finally gave the first such fully abstract models first for PCF [1] and Algol [3] then for many other programming languages [16].
Relating any mathematical (operational or denotational) model to the de facto ‘model’ which is the compiler is a much different proposition. Whereas constructing a compiler from a mathematical specification is an arduous but achievable task, what we want to consider is the converse question. Given a compiler, could we, at least in principle, construct a semantic model of the language? What is the right avenue of attack for this daunting problem?
A compiler is in some sense a formal specification. However, the compiler as a specification does not help us reason about basic properties of terms, such as contextual equivalence. How can we extract a more conventional kind of semantics? On the face of it, the question may seem preposterous at worst, unanswerable at best.
Operational semantics (OS), the workhorse of much applied programming language theory, seems an unsuitable candidate for this job. Much like structural models of natural language, the rules of OS have a syntactic intricacy that cannot realistically be reconstructed from behavioural observations. Although recent progress has been achieved in learning the structural semantics of natural languages [23], the operational semantics of programming languages cannot readily take advantage of these methods. For example, the basic beta-reduction rule, present in some form in all functional languages, requires a complex form of substitution which assumes the concepts of binder, free variable and alpha equivalence. Denotational semantics, on the other hand, seems a more plausible candidate because of its independence of syntax. A final key observation is that some denotational models can be mathematically elementary. This is true of trace-like models in general [14] and game semantic models [5] in particular. In fact one can think of game semantics as compositional trace models suitable for higher-order programming languages. This seems to give us a foothold in attacking the problem. If a model can be specified simply as a set of traces subject to combinatorial constraints, perhaps such models can be machine-learned using techniques that proved successful in the learning of natural languages.
For tutorials and surveys of game semantics the reader is referred to the literature [4, 16]. The basic elements of a game semantics are moves, with a structure called an arena. Arenas are determined by the type signature of the term and consist of all the possible interactions (calls and returns) between a term and its context. Sequences of interactions are called plays and they characterise particular executions-in-context. Finally, terms are modelled by sets of plays called strategies, denoting all possible ways in which a term can interact with its context.
Certainly, not all interactions are possible, so plays are constrained by legality conditions. Conversely, strategies are subject to certain closure conditions, such as prefix-closure, stipulating that if certain plays are included so must be other ones. Because all features of a game semantic model are combinatorial properties of sequences (plays) or sets of sequences (strategies), using machine learning to identify them is no longer a preposterous proposition. The question certainly remains whether these properties can be learned and how accurately.
In this paper we present two sets of computational experiments focussing on the learnability of known game semantic models of two similar programming languages. We first look at the intrinsic learnability of the language, automatically creating models from positive examples of legal plays, tested against sets of plays which are slightly modified so as to become illegal. The second experiment uses the learned models to perform latent semantic analysis [30] on the two languages, attempting to determine the provenance of a set of legal plays. These experiments are repeated both for a precise and an approximate representation of the game model.
To learn the model we use neural networks, more precisely long short-term memory neural nets [26] (LSTM), which proved to be highly successful in automated translation [45] and text synthesis [13]. The results are surprisingly good, with the trained net being able to reliably discriminate both between legal and illegal plays, and between legal plays from two slightly different programming languages. Moreover, the neural net had a standard architecture and, relative to LSTMs used in natural language processing, was quite small. Training converged rapidly, requiring relatively modest computational resources.
These positive results should be received with cautious optimism. Methodologically, a strong case can be made that game semantics gives a possible angle of attack on the machine-learning problem for programming languages, compared to operational or other denotational programming language semantics (e.g. domain-theoretic [2]). Moreover, it appears that the algorithmically complex combinatorial patterns which characterise the legality of game models are learnable enough to be able to reliably distinguish between legal plays and plays with small illegal irregularities and between plays belonging to slightly different languages.
Of course, the resulting model is opaque and cannot serve as a basis for true understanding of a language, but it could be the starting point of a deeper automation of certain programming language processes which require an effective, even if opaque, semantic model to distinguish between legal (possible) behaviour and illegal (impossible) behaviour. Activities such as testing, fuzzing, or compiler optimisations fall within this broad range.
A possible objection to our approach is that we generate training data sets from a known semantic model, whereas our stated initial problem referred to languages ‘in the wild’ which have no such models. To respond: there is no substantial difference between generating plays from a known model and collecting interaction traces from instrumented real code using some form of time-stamped profiling – that is the consequence of the full abstraction result for the model. But the latter process is far more laborious than producing the traces from a known model. A known model has other advantages compared to using unknown code.
Models of IA, both sequential and concurrent, have been studied algorithmically and are known to be complex, so the learning problem is non-trivial [34, 19]. For a controlled experiment in learnability ours is a suitable methodology, and the results indicate that applying the technique to real languages has potential.
2. Idealised Algol (IA)
For the sake of a focussed presentation we shall look at two variations on the programming language Idealised Algol (IA) [43]. IA is suitable for this experiment for several reasons. To begin with, it is a family of well-studied programming languages having at their core an elegant fusion of functional and imperative programming. We will concentrate in particular on two members of this family, Abramsky and McCusker’s version of sequential IA [3] and Ghica and Murawski’s version of concurrent IA [20]. Both these languages have mathematically precise (fully abstract) game semantics which have an underlying common structure which makes it possible, but not trivial, to compare them. Finally, from a pragmatic point of view, the models themselves are elegant, can be presented concisely, and lend themselves well to computational experiments.
IA has basic data types, such as integers and booleans, with which three kinds of ground data types are constructed: commands (unit), local variables (references) and expressions. Function types are uniformly created out of these ground types. The terms of the language are those common in functional (abstraction and application, recursion, if expressions, arithmetic and logic) and imperative (local variables, assignment, de-referencing, sequencing, iteration). A peculiarity of IA, which sets it apart from most commonly encountered programming languages is the fact that it uses a call-by-name mechanism for function application [41]. For technical reasons, the IA we study here allows side-effects in expressions and admits a general variable constructor in which reading and writing to a variable can be arbitrarily overloaded. Concurrent IA, as described here, uses the same types plus a new type for binary semaphores, along with new
terms for parallel execution of commands and semaphore manipulation. Both languages have the type system of the simply-typed lambda calculus, with all language constants definable as (possibly higher-order) constants.
2.1. Game Semantics
In game semantics the element of interaction between a term-in-context and the context is called a move. Interactions characterising any particular execution are called plays. All possible interactions with all possible contexts are called strategies.
Moves happen in arenas, mathematical structures which define the basic causal structures relating such actions.
**Definition 1** (Arena). An arena $A$ is a set $M$ equipped with a function $\lambda : M \rightarrow \{o,p\} \times \{q,a\}$ assigning each move one of four possible polarities, and a relation $\vdash \subseteq M \times M$ called enabling.
The four polarities are opponent/proponent and question/answer.
Arenas are used to give interpretation to types.
**Definition 2** (Base arenas). The arenas for unit and boolean are:
\[
\begin{align*}
\text{unit} & : M = \{q,a\}, \quad \lambda = \{(q,\text{oq}),(a,\text{pa})\}, \quad \vdash = \{(q,a)\} \\
\text{bool} & : M = \{q,t,f\}, \quad \lambda = \{(q,\text{oq}),(t,\text{pa}),(f,\text{pa})\}, \quad \vdash = \{(q,t),(q,f)\}.
\end{align*}
\]
The significance of the question $q$ is that a computation is initiated and of the answer $a$ (or, respectively $t$ or $f$) is that a result is produced. The enabling relation establishes that the answer must be justified by the asking of the question. The interpretation of the opponent/proponent polarity is that ‘proponent’ moves are initiated by the term whereas ‘opponent’ moves by the context. As we can see, for computation at base types the computation is initiated via questions asked by the opponent, i.e. the context, and terminated via answers provided by the proponent, i.e. the term.
**Definition 3** (Initial move). The moves without an enabler are called initial moves.
In both arenas above the set of initial moves is \( I = \{ q \} \). Arenas with multiple initial moves correspond to product formation, where the two initial questions correspond to computing the two projections.
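Definitions 1–3 can be sketched directly as data. The following Python fragment is an illustrative encoding of our own (the names `Arena`, `UNIT` and `BOOL` are not from the paper); it represents an arena as a move set, a labelling function and an enabling relation, and recovers the initial moves of the two base arenas:

```python
from dataclasses import dataclass

# Illustrative encoding of Definition 1: a set of moves, a labelling
# function assigning each move a polarity pair, and an enabling relation.
@dataclass
class Arena:
    moves: frozenset
    labelling: dict       # move -> (o/p, q/a)
    enabling: frozenset   # pairs (m, m'): m enables m'

    def initial_moves(self):
        # Definition 3: the moves without an enabler.
        return self.moves - {m2 for (_, m2) in self.enabling}

# The base arenas of Definition 2.
UNIT = Arena(frozenset({"q", "a"}),
             {"q": ("o", "q"), "a": ("p", "a")},
             frozenset({("q", "a")}))

BOOL = Arena(frozenset({"q", "t", "f"}),
             {"q": ("o", "q"), "t": ("p", "a"), "f": ("p", "a")},
             frozenset({("q", "t"), ("q", "f")}))
```

For both base arenas `initial_moves()` yields `{q}`, matching the observation above.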
From the basic arenas, composite arenas may be created; for example, the arena for a function type \( A \Rightarrow B \) is constructed from the arenas for \( A \) and \( B \) as follows.
**Definition 4** (Composite arena).
\[
A = \langle M_A, \lambda_A, \vdash_A \rangle \\
B = \langle M_B, \lambda_B, \vdash_B \rangle \\
A \Rightarrow B = \langle M_A \uplus M_B, \lambda^*_A \uplus \lambda_B, \vdash_A \uplus \vdash_B \uplus (I_B \times I_A) \rangle,
\]
The function \( \lambda^*_A \) is simply \( \lambda_A \) with the \( o \) and \( p \) polarities reversed. The significance of this polarity reversal is that in the case of arguments to a function the term/context polarity of the interaction becomes reversed. Enabling for function arenas relates not only moves in the two component arenas, but also each initial move in the argument \( A \) to each initial move in the return type \( B \), indicating that arguments may be invoked only after the function as a whole has started executing.
For a term in context with type judgement \( x_1 : A_1, \ldots, x_k : A_k \vdash M : A \), the arena in which it is interpreted is \( A_1 \Rightarrow \cdots \Rightarrow A_k \Rightarrow A \).
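Definition 4 can be implemented by tagging the two move sets to keep the disjoint union disjoint and reversing the o/p polarity on the argument side. The sketch below is our own illustration (the helper names `initials` and `arrow` are hypothetical), with an arena represented as a plain `(moves, labelling, enabling)` triple:

```python
# Arenas as plain (moves, labelling, enabling) triples; tags 'A'/'B'
# keep the disjoint union of the two move sets disjoint.
UNIT = ({"q", "a"},
        {"q": ("o", "q"), "a": ("p", "a")},
        {("q", "a")})

def initials(arena):
    moves, _, enabling = arena
    return moves - {m2 for (_, m2) in enabling}

def arrow(a, b):
    (_, la, ea), (_, lb, eb) = a, b
    flip = {"o": "p", "p": "o"}
    # lambda*_A: reverse the o/p polarity of every move of A.
    lab = {("A", m): (flip[op], qa) for m, (op, qa) in la.items()}
    lab.update({("B", m): pol for m, pol in lb.items()})
    en = {(("A", x), ("A", y)) for (x, y) in ea}
    en |= {(("B", x), ("B", y)) for (x, y) in eb}
    # I_B x I_A: each initial move of B enables each initial move of A.
    en |= {(("B", i), ("A", j)) for i in initials(b) for j in initials(a)}
    return set(lab), lab, en
```

For `arrow(UNIT, UNIT)` the only initial move is the tagged question of `B`, and the argument's question becomes a proponent move, as the polarity reversal prescribes.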
An interaction corresponding to an execution run of a term-in-context is called a *play*, and it is a sequence of *pointed moves* subject to correctness conditions which will be discussed later. A pointed move is an arena-move equipped with two *names* (in the sense of [40]), the first one representing its ‘address’ in the sequence and the second one is ‘the pointer’, i.e. the address of an enabling arena-move which occurs earlier in the sequence [15].
**Example 1.** The typical play in the interpretation of sequential composition \( \text{seq} : \text{unit}_3 \rightarrow \text{unit}_2 \rightarrow \text{unit}_1 \) is
\[
q_1 n_1 \star \cdot q_3 n_2 n_1 \cdot a_3 n_3 n_2 \cdot q_2 n_4 n_1 \cdot a_2 n_5 n_4 \cdot a_1 n_6 n_1.
\]
This sequence of actions is explained as follows: start computation \((q_1)\), ask first argument \((q_3, \text{justified by initial question } n_1)\), receive result \((a_3, \text{justified by the preceding question})\), ask second argument \((q_2, \text{justified also by initial question } n_1)\), receive result \((a_2, \text{justified by the preceding question})\), indicate termination \((a_1, \text{justified by initial question } n_1)\). Pointers are usually represented diagrammatically by drawing an edge between moves with equal pointer names, then eliding the pointer names:
\[
\begin{array}{cccccc}
q_1 & q_3 & a_3 & q_2 & a_2 & a_1
\end{array}
\]
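The pointed-move representation lends itself to a direct encoding. Below is a sketch of our own (the names `SEQ_PLAY`, `ENABLES` and `well_justified` are hypothetical) that writes the play for seq as a list of (arena move, name, pointer) triples and checks that every pointer refers back to an earlier, enabling move:

```python
# A pointed move is (arena_move, name, pointer); '*' marks the initial
# pointer. The play for seq from Example 1:
SEQ_PLAY = [("q1", 1, "*"), ("q3", 2, 1), ("a3", 3, 2),
            ("q2", 4, 1), ("a2", 5, 4), ("a1", 6, 1)]

# Enabling relation of the arena unit3 -> unit2 -> unit1, with moves
# indexed by their unit component, following the paper's subscripts.
ENABLES = {("q1", "a1"), ("q1", "q2"), ("q1", "q3"),
           ("q2", "a2"), ("q3", "a3")}

def well_justified(play, enables):
    """Every non-initial move must point to an earlier move that enables it."""
    seen = {}                       # name -> arena move
    for move, name, ptr in play:
        if ptr != "*":
            if ptr not in seen or (seen[ptr], move) not in enables:
                return False
        seen[name] = move
    return True
```

This check only validates justification pointers; the language-specific legality conditions are layered on top of it, as the following subsections describe.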
Because the actual correctness conditions for plays are language-specific we will present them separately for sequential and concurrent IA, bearing in mind that everything up to this point is shared by the two.
2.2. Plays for sequential IA
Given a justified sequence \( s \) in an arena \( A \) the notion of player and opponent view are defined by induction as follows:
**Definition 5 (View).**
\[
\begin{align*}
\text{pview}(\epsilon) &= \epsilon \\
\text{pview}(s \cdot mnn') &= \text{pview}(s) \cdot mnn' & \text{when } (\pi_1 \circ \lambda)(m) = p \\
\text{pview}(s \cdot mn_1n_2 \cdot s' \cdot m'n'_1n_1) &= \text{pview}(s) \cdot mn_1n_2 \cdot m'n'_1n_1 & \text{when } (\pi_1 \circ \lambda)(m') = o \\
\text{pview}(s \cdot mn\star) &= mn\star
\end{align*}
\]
\[
\begin{align*}
\text{oview}(\epsilon) &= \epsilon \\
\text{oview}(s \cdot mnn') &= \text{oview}(s) \cdot mnn' & \text{when } (\pi_1 \circ \lambda)(m) = o \\
\text{oview}(s \cdot mn_1n_2 \cdot s' \cdot m'n'_1n_1) &= \text{oview}(s) \cdot mn_1n_2 \cdot m'n'_1n_1 & \text{when } (\pi_1 \circ \lambda)(m') = p
\end{align*}
\]
The view of a sequence is related to the stack discipline of computation in sequential IA, where certain actions, although present in the interaction traces, are temporarily ‘hidden’ by other actions.
Legal plays are sequences subject to certain combinatorial conditions which capture the extent of possible behaviour in a given language.
**Definition 6** (IA-legal play). A justified sequence is an IA-legal play if it is:
**alternating**: the proponent/opponent polarities of consecutive moves are different,
**well-bracketed**: only the most recently unanswered question in a sequence can be answered,
**P- and O-visible**: a proponent (respectively, opponent) move must have a justifier in the proponent (respectively, opponent) view of the preceding sequence.
Violating the alternation condition means that successive \(p\)-moves or \(o\)-moves occur. The bracketing condition can be violated when questions are answered in the wrong order or multiple times:
\[
q_1 \rightarrow q_3 \rightarrow q_2 \rightarrow a_2 \rightarrow a_1
\]
Finally, the sequence below shows a violation of the visibility condition, for opponent:
\[
q_1 \rightarrow q_2 \rightarrow q_3 \rightarrow q_2 \rightarrow q_3
\]
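The first two conditions of Definition 6 are easy to state operationally. The sketch below is our own illustration (the names and the play encoding are hypothetical, and the visibility check, which requires the view computation of Definition 5, is omitted):

```python
# A play is a list of (kind, polarity, pointer_index) triples, where kind
# is 'q' or 'a', polarity is 'o' or 'p', and an answer's pointer_index is
# the position of the question it answers.
def alternating(play):
    # Consecutive moves must have different proponent/opponent polarities.
    return all(play[i][1] != play[i + 1][1] for i in range(len(play) - 1))

def well_bracketed(play):
    pending = []                       # stack of indices of open questions
    for i, (kind, _, ptr) in enumerate(play):
        if kind == "q":
            pending.append(i)
        else:                          # an answer must close the top question
            if not pending or pending[-1] != ptr:
                return False
            pending.pop()
    return True

# The play of seq: q1 q3 a3 q2 a2 a1 (cf. Example 1).
SEQ = [("q", "o", None), ("q", "p", 0), ("a", "o", 1),
       ("q", "p", 0), ("a", "o", 3), ("a", "p", 0)]
```

The stack in `well_bracketed` mirrors the stack discipline of sequential computation: an answer is accepted only when it closes the most recently opened, still-pending question.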
2.3. **Plays for Idealised Concurrent Algol (ICA)**
It is a general, and somewhat surprising, feature of game semantics that richer languages have simpler models. This is not as strange as it seems, because the more features a language has the more unrestricted its interaction with the context can be. In fact it is possible to think of ‘omnipotent’ contexts in which the interactions are not constrained combinatorially [22]. When sequential IA is enriched with parallelism, the alternation constraint disappears and bracketing and visibility are relaxed to the following, more general constraints:
**Definition 7** (ICA-legal play). A justified sequence is an ICA-legal play if it is:
**forking**: In any sequence \( s \cdot qn_1 n_1' \cdot s' \cdot mn_2 n_1 \) the question \( q \) must be pending,
**joining**: In any sequence \( s \cdot qn_1 n_1' \cdot s' \cdot an_2 n_1 \) all questions justified by \( q \) must be answered.
The idea is that a ‘live thread’ signified by a pending question can start new threads (justify new questions) so long as it is not terminated (answered). Conversely, a thread can be terminated (the question can be answered) only after the threads it has started have also terminated. The simplest sequences that violate **fork** and **join**, respectively, are:
\[
q_1 \cdot a_1 \cdot q_2 \qquad\text{and}\qquad q_1 \cdot q_2 \cdot a_1.
\]
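The fork and join conditions can also be stated operationally. The following sketch is our own illustration, not code from the paper: a play is a list of (kind, justifier) pairs, where justifier is the index of the justifying move, or None for initial moves.

```python
def check_fork_join(play):
    """play: list of (kind, justifier) with kind 'q' or 'a' and
    justifier the index of the justifying move (None if initial)."""
    answered = set()  # indices of questions that have been answered
    for i, (kind, j) in enumerate(play):
        if j is None:
            continue
        jk, _ = play[j]
        if kind == 'q':
            # fork: the justifying question must still be pending
            if jk != 'q' or j in answered:
                return False
        else:
            # an answer must close a pending question ...
            if jk != 'q' or j in answered:
                return False
            # join: every question justified by j must already be answered
            children = [k for k, (ck, cj) in enumerate(play[:i])
                        if ck == 'q' and cj == j]
            if any(c not in answered for c in children):
                return False
            answered.add(j)
    return True
```

The play of Example 2 passes this check, while the two minimal violating sequences above fail at the fork and join clauses, respectively.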
Example 2. The typical play in the interpretation of parallel composition \( \text{par} : \text{unit}_3 \rightarrow \text{unit}_2 \rightarrow \text{unit}_1 \) is
\[
{q_1}^{n_1}_{\star} \cdot {q_3}^{n_2}_{n_1} \cdot {q_2}^{n_4}_{n_1} \cdot {a_3}^{n_3}_{n_2} \cdot {a_2}^{n_5}_{n_4} \cdot {a_1}^{n_6}_{n_1}.
\]
This sequence of actions is interpreted as: start computation \((q_1)\), ask the first argument \((q_3\), justified by the initial question \(n_1)\), immediately ask the second argument \((q_2\), also justified by \(n_1)\), receive the results in some order \((a_3\), justified by \(n_2\), and \(a_2\), justified by \(n_4)\), and indicate termination \((a_1\), justified by the initial question \(n_1)\). The play is represented diagrammatically in Fig. 1:
\[
q_1 \cdot q_3 \cdot q_2 \cdot a_3 \cdot a_2 \cdot a_1
\]
Figure 1: Legal play with pointers
2.4. Strategies
As we mentioned, plays characterise an interaction between term and context occurring in a particular run. In order to characterise the term we take the set of all such possible interactions, noting that they are also characterised by various closure conditions. They all share prefix-closure as a common feature, typical to all trace-like models. In the case of sequential IA, the strategies are required to be deterministic whereas in the case of concurrent IA they must be closed under certain permutations of moves in plays. It is strategies which give the fully abstract model of the language.
We are not going to give the detailed definitions here because we shall focus on the learning of plays, rather than strategies. Learning strategies seems a more difficult proposition, which we shall leave for future work.
2.5. Algorithmic considerations
Compared to the complexity of the syntax, the formal rules describing legality of behaviours in the language in terms of combinatorial properties of pointer sequences are remarkably succinct: just three rules for sequential IA (alternation, bracketing, and visibility) and two for concurrent IA (fork and join). A reasonable question to ask is whether these sets of sequences, taken as formal languages, are computationally complex or simple.
It turns out that the answer depends on the order of the arena, where ground type is order 0 and an arena $A \Rightarrow B$ is the maximum between the order of $A$ plus one and the order of $B$. Plays in sequential IA defined in arenas of order up to 2 are regular languages, definable in terms of finite state automata [18], and for arenas of order up to 3 they are context-free languages, definable in terms of push-down automata [38]. Beyond this, strategies form undecidable languages [35]. In the case of concurrent IA, games in arenas of order 2 or more are undecidable [21].
Of course, the results above refer to questions of language equivalence. Checking whether a (finite) sequence is a valid play in the models of sequential
or concurrent IA is always decidable. But the results above suggest that the problem of learning such models is computationally challenging.
3. Learnability of IA models
We will evaluate the learnability of sequential and concurrent IA using latent semantic analysis. First, a type signature is chosen, which determines an arena. Then a neural network is trained with random plays of the arena so that the level of perplexity exhibited by the model is minimised. Using perplexity as a measure of accuracy is common in natural language processing. Given a probability model \( Q \), one can evaluate it by how well it predicts a separate test sample \( x_1, \ldots, x_N \) drawn from a distribution \( P \).
**Definition 8** (Perplexity). The perplexity of the model is defined by:
\[
\Psi = 2^{-\frac{1}{N} \sum_{i=1}^{N} \log_2 Q(x_i)}.
\]
The concept of perplexity of a probability distribution is the established measure of quality for a natural language model, and it is the exponentiation of the (cross)entropy of the model. The reasons for using perplexity rather than entropy are largely related to the history and culture of the discipline.
Better models have lower perplexity, as they are less ‘perplexed’ by the sample. In natural language processing, the perplexity of large corpora (1 million words) is around 250 (per word). The exponent in the definition of perplexity (the cross-entropy) indicates how many bits are required to represent each word in the sequence. For high-quality natural language corpora, the cross-entropy is around 8 bits/word or 1.75 bits/letter [9]. The models are validated by computing the perplexity of a model against a different random sample of correct plays coming from the same language, over the same arena. A successful learning model will exhibit similar perplexities between the training set and the validation set.
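Definition 8 translates directly into code. The sketch below is our own illustration: it computes the perplexity from the probabilities \(Q(x_i)\) that a model assigns to the items of a test sample.

```python
import math

def perplexity(probs):
    """Perplexity = 2 ** cross-entropy, where the cross-entropy is the
    average negative log2-probability over the test sample."""
    cross_entropy = -sum(math.log2(q) for q in probs) / len(probs)
    return 2 ** cross_entropy
```

For instance, a model that assigns uniform probability 1/4 to every test item has perplexity 4, matching the intuition that it is as ‘perplexed’ as a uniform guess over four alternatives.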
The accuracy of the learned model is then tested in two ways. The first test is to expose the model to a new sample, coming from the same programming
language but perturbed using several single-character edits (insertions, deletions or substitutions) applied randomly to each sequence. The number of such edits is known as the “Levenshtein distance”. This results in a set of plays at a small normalised Levenshtein distance from the correct plays which were used for training. Concretely, we use a distance of up to 0.1, e.g. 5 random edits applied to a sequence of length 50. In order for the test to be successful we expect to see a significant increase in perplexity on the test set as compared with the validation and training sets.
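The perturbation procedure can be sketched as follows; this is our own illustration (the paper does not give its generator), and note that random edits may occasionally cancel each other out, so the stated distance is an upper estimate of the true Levenshtein distance.

```python
import random

def perturb(seq, vocab, max_norm_dist=0.1, rng=random):
    """Apply up to floor(max_norm_dist * len(seq)) random single-symbol
    edits (insertion, deletion, substitution) to a list of moves."""
    s = list(seq)
    for _ in range(max(1, int(max_norm_dist * len(seq)))):
        op = rng.choice(['ins', 'del', 'sub'])
        i = rng.randrange(len(s) + (op == 'ins'))
        if op == 'ins':
            s.insert(i, rng.choice(vocab))
        elif op == 'del' and len(s) > 1:
            del s[i]
        else:  # substitution (also the fallback for 'del' on length 1)
            s[i] = rng.choice(vocab)
    return s
```

Applied to a play of length 50 this performs 5 edits, yielding a sequence of length between 45 and 55 over the same move vocabulary.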
The second test is to expose the model to a sample of correct plays coming from the other language, i.e. testing the sequential model against concurrent plays and *vice versa*. Noting that each sequential program is a particular (degenerate) form of a concurrent program, we expect the concurrently-trained neural network to exhibit similar levels of perplexity when exposed to the test data set and the validation data set, but we expect the sequentially-trained program to exhibit greater perplexity when exposed to the test data set — since obviously there are concurrent plays which have no sequential counter-part.
Game models are determined by the arena in which they happen. As discussed in Sec. 2.5, the order of the arena has a significant impact on the algorithmic complexity of the model. We would expect games in low-order arenas to be faster to learn than games in higher-order arenas, but it is difficult to guess the effect the arena shape has on the accuracy of the model. As a consequence we examine both ‘narrow’ and ‘wide’ arenas. If we visualise an arena as a tree, the order is the height. The width of the arena corresponds to the number of arguments a function takes, and determines the number of distinct moves in its vocabulary of symbols. Below we show several arenas, depicted as trees:
The arenas above have types \((\text{unit} \Rightarrow \text{unit}) \Rightarrow \text{unit}\) (order 2, width 1), \(\text{unit} \Rightarrow \text{unit}\) (order 1, width 2) and, respectively, \((\text{unit} \Rightarrow \text{unit}) \Rightarrow (\text{unit} \Rightarrow \text{unit}) \Rightarrow \text{unit}\) (order 2, width 2).
The total number of moves is of the order \(O(w^o)\) where \(w\) is the width of the arena and \(o\) the order of the arena. From the point of view of the term modelled, the width corresponds to the number of free variables in the term or the number of arguments a function might have, whereas the height is the order of the type of the term. We will conduct the experiment on arenas of orders 1 to 3. Going beyond 3 seems rather irrelevant as functions of order 4 or higher are rarely used in practice. We will conduct the experiment on arenas of width 1, i.e. functions taking one argument, in order to emphasise the complexity of the model as caused by higher-order features, and on arenas of width 5, i.e. functions with relatively large numbers of arguments. The two will be contrasted and compared.
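The \(O(w^o)\) growth can be illustrated by counting one move per node in a full tree; this is our own simplification, used only to show the growth rate, and the exact constant depends on the encoding of the arena.

```python
def arena_size(order, width):
    """Number of nodes in a full `width`-ary tree of height `order`:
    1 + w + w**2 + ... + w**order, which is O(width ** order)."""
    return sum(width ** k for k in range(order + 1))
```

For example, a width-1 arena of order 3 has only 4 nodes, whereas a width-5 arena of the same order has 156, showing how the vocabulary of moves grows with both parameters.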
The corpora of plays we are creating consist of plays which are abstracted in two ways. The first simplification is that we replace all ground types with the \(\text{unit}\) type. Indeed, in the legality rules for plays, sequential or concurrent, values play no role, and they can be safely abstracted by a generic notion of answer-move. This is important, because the presence of integers in plays would explode the vocabulary of moves beyond what is manageable. The second simplification is eliding the pointer information and focussing on sequences of moves only. By eliminating pointers we make the model easier to learn, but we simultaneously make it less powerful since the pointer information is lost. The reason for removing the pointers is similar to that for removing values: they are usually represented by integers, and the presence of integers in traces may increase both the size of the vocabulary and the length of the sequences too much. However, eliding pointer information is a common abstraction in games-based static analysis of programs, so studying its impact on learnability seems relevant [17].
The length of random plays used in learning, validation and testing is at most 50. The size of the corpus of random sequences used for learning is 10,000 or 100,000, and the size of the corpora used for validation and testing is 10,000 sequences. These parameters are arbitrary and not too important. Since the sequences are generated, there are no limits on maximal sequence length or
corpus size. Keeping in mind that the length of a play represents the number
of function-argument interactions, a size of 50 seems generous. The number
of plays in the corpora only impacts accuracy (which is already very good, as
it will be seen) and the duration of the training process (which is reasonable,
as it will be seen). For learning we use LSTMs, briefly described in Sec. 5.1.
The details of the implementation and the (hyper)parameters of the model are
discussed in the next section.
3.1. Latent semantic analysis of plays with justification pointers
Pointers raise an additional problem in that the concrete representation matters, and it may clutter the learning process with extraneous information. For example, the sequence consisting of a question followed by the answer it justifies can be concretely represented, using integers for names, as $q0a12$, but also equivalently as $q10a03$. For learning we must consider a different representation which is neither the original, based on absolute sequence indices [27], nor the representation based on using names for pointers [15].

Absolute indices are not invariant under concatenation, so the same combinatorial patterns occurring earlier or later in the sequence will involve different numerical values, which is generally difficult to learn. Using names seems even harder because, via alpha equivalence, any play can have many equivalent but distinct representations. Whether alpha equivalence can be learned is an interesting but different question. A representation which seems suitable is to use relative indices to indicate the offsets of the pointers, in the style of de Bruijn indices. So the sequence above will be concretely represented as $q0a1$. The more complex play in Fig. 1 is represented as $q_10q_31q_22a_32a_22a_15$.
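The relative-offset encoding can be computed mechanically. The sketch below is ours: it pairs each move with the distance back to its justifier, reproducing the representation of the play in Fig. 1.

```python
def relative_repr(play):
    """play: list of (label, justifier_index) pairs, with justifier_index
    None for initial moves.  Returns de Bruijn-style (label, offset)
    pairs, where offset is the distance back to the justifier
    (0 for initial moves)."""
    return [(label, 0 if j is None else i - j)
            for i, (label, j) in enumerate(play)]
```

On the play of Fig. 1, where q3 and q2 are justified by q1 and each answer by its question, the offsets come out as 0 1 2 2 2 5, and the simple question-answer play is rendered q0 a1.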
The computational experiments consist of creating LSTM models, via learning, from sets of plays belonging to given arenas, from the two languages, concurrent and sequential variants of IA. We call the former concurrent pointer models (CPMs) and the latter sequential pointer models (SPMs).
We are then using the learned models to perform latent semantic analysis by measuring the perplexity of the model on new sets of plays generated by both the concurrent and the sequential variants, in the same arena. We use independently generated random sets of plays from the same model as validation data, and random sets of plays from the other model as test data.
Figs. 2-3 show the perplexity of using the model trained on sequential pointer plays to analyse concurrent pointer plays as the third bar in each diagram. Indeed, the latent semantic analysis is conclusive. The validation data, consisting of a different set of random plays from the sequential model, elicits the same perplexity from the model as the training data, indicating concordance of the languages, and more than 4 orders of magnitude lower than the testing data. There is no obvious benefit, or indeed drawback in terms of precision, from increasing the size of the corpora tenfold, from 10k to 100k.
Conversely, as seen in Figs. 4-5, we may use the concurrently-trained model to analyse sequential plays. Since behaviourally sequential plays can be always found as a subset among the concurrent plays, we expect the perplexity differences between validation and testing to be significantly smaller, and indeed they are. However, there is an observable difference between the two, with testing perplexity being up to 100 times higher than validation perplexity.
How can this be explained if the testing set is a subset of the training set? A possible explanation is that the distribution of sequential plays among the concurrent plays is relatively rare. The number of possible interleavings of a play in the concurrent language grows very fast, faster than exponentially in the length of the sequence. Out of all possible interleavings, precisely one is the sequential interleaving. This means that the exposure of the neural network during training to sequential plays is likely to be negligible, which is consistent with a higher perplexity for the relatively rare sequential plays. Should we interpret this as a failure of the latent semantic analysis? The answer to this question depends on our aim. From the point of view of a formal language analysis we can consider this as a failure because, formally, sequential plays are included in concurrent plays. However, if we are interested in sequentiality as an idiomatic concurrency then the result is a surprising success, and a success that would be difficult to achieve using more conventional automata-based methods. Just because British English (cf. sequential programming) is a dialect of English (cf. concurrent programming), it does not mean that British English text is not, or should not be, recognisable as such.

Figure 2: Latent semantic analysis of concurrent plays in SPMs (10k plays)

(a) Width 1, \( \text{perp} < 10^5 \)

(b) Width 5, \( \text{perp} < 10^6 \)

Figure 3: Latent semantic analysis of concurrent plays in SPMs (100k plays)

(a) Width 1, \( \text{perp} < 25 \)

(b) Width 5, \( \text{perp} < 250 \)

Figure 4: Latent semantic analysis of sequential plays in CPMs (10k plays)

(a) Width 1, \( \text{perp} < 80 \)

(b) Width 5, \( \text{perp} < 400 \)

Figure 5: Latent semantic analysis of sequential plays in CPMs (100k plays)
3.2. Latent semantic analysis of pointer-free plays
We have run the same experiment, but this time we used an abstracted representation of plays in which the pointer information has been deleted. We call these models sequential pointer-free models (SPFMs) and, respectively, concurrent pointer-free models (CPFMs).
Figs. 6 and 7 show the perplexity (third bar) of testing a sequential model on concurrent plays over the same arena. In this case the evidence is overwhelming, from 2-5 orders of magnitude in perplexity increase. The extra training provided by using 100k samples is not significant. We conclude that some precision has been lost, since the difference in perplexity between testing data and validation data is now smaller. However, even reduced, the perplexity of the testing data can result in an unambiguous classification.
We again use the concurrent model to test sequential plays (Figs. 8-9), this time for the pointer-free representation. The experiment shows, as expected, that the concurrent model cannot identify sequential plays as foreign, since all sequential plays can again be found in the concurrent model. As in the case of the pointer model, the perplexity of the test set is not as low as that of the validation set. However, in comparison to the pointer model (Figs. 4-5) the differences in perplexity are negligible, indicating that the pointer information was useful in identifying the concurrent behaviour as idiomatic.
Figure 6: Latent semantic analysis of concurrent plays in SPFMs (10,000 plays)

Figure 7: Latent semantic analysis of concurrent plays in SPFMs (100,000 plays)

Figure 8: Latent semantic analysis of sequential plays in CPFMs (10,000 plays)

Figure 9: Latent semantic analysis of sequential plays in CPFMs (100,000 plays)

3.3. Detecting perturbations in pointer-free plays

In the case of the pointer-free representation we further test the robustness of the learned model by using test plays from the same model, but intentionally randomly perturbed at a fixed normalised Levenshtein distance ($\delta \leq 0.1$). The results are in Fig. 10 for sequential models trained on 10K samples, in Fig. 11 for models trained on 100K samples, and in Figs. 12-13 for concurrent models. Each bar chart contains arenas of order 1-3. Where data is missing, it is because our hardware computational resources (memory) could not cope with the size of the model.
We note that for both sequential and concurrent models, the learned model is significantly more accurate when trained on 100k samples rather than 10k samples. In absolute terms, the perplexity of the 100k-sample models ranges from single digits to just over 50. However, the absolute perplexity is not relevant in latent semantic analysis, just the relative difference in perplexity between training, validation, and test data. Even in the cases with the weakest discrimination (Fig. 13) the perplexity of the test data is almost 2 times larger than that of the validation data.
3.4. Implementation notes
We are using the standard implementation of LSTM distributed with TensorFlow\(^1\). The model uses an LSTM cell which processes moves sequentially, computing probabilities for possible values of the next move in the sequence. The memory state is initially all zeroes and is updated after each move. Ideally, in a recurrent neural net (RNN), the output depends on arbitrarily distant inputs. However, this makes the training process computationally intractable, so it is common in practice to ‘unfold’ the net a fixed number of steps; in the concrete case of our model this value is 20. The inputs are represented using a dense embedding. This is considered undesirable for text but it is demanded here by the large size of the symbol set [6]. The loss function for the model is the sample perplexity, discussed earlier. To increase the expressive power of the model, two LSTMs are layered, each containing 200 nodes. This is considered a small LSTM model.
The training cycle consists of several (13) cycles of training (“epochs”), although in almost all cases except the largest arenas, the model converges after only 1-2 epochs. Further training leads to little or no improvement in the model, as seen in Fig. 14, which is a typical example. The experiments were carried out on a mid-range CUDA device, GeForce GTX 960. The training cycle for each model was around one hour.

\(^1\)https://github.com/tensorflow/models

Figure 10: Latent semantic analysis of perturbed SPFMs (10,000 plays)

(a) Width 1, \(\text{perp} < 9\)

(b) Width 5, \(\text{perp} < 45\)

Figure 11: Latent semantic analysis of perturbed SPFMs (100,000 plays)

(a) Width 1, \(\text{perp} < 15\)

(b) Width 5, \(\text{perp} < 40\)

Figure 12: Latent semantic analysis of perturbed CPFMs (10,000 plays)

(a) Width 1, \(\text{perp} < 55\)

(b) Width 5, \(\text{perp} < 450\)

Figure 13: Latent semantic analysis of perturbed CPFMs (100,000 plays)

(a) Width 1, \(\text{perp} < 30\)

(b) Width 5, \(\text{perp} < 250\)
4. The challenge of nominal features
In this section we will give some negative results: less successful experiments in applying neural network learning to nominal patterns.
One of the most difficult conceptual, mathematical, and algorithmic features of games is justification pointers. We examined both the learnability of plays with pointers, and of plays with pointers abstracted away. In the case of plays with pointers we chose a novel representation in which the pointer indicates the offset between the justifier and the justified moves. This representation would be awkward for mathematical proofs but it seemed appealing for learning as it is translation invariant. This means that a particular sub-sequence would have the same representation whether it occurs earlier or later in a sequence, which would not be the case if pointers were absolute indices, as used in the original
Hyland-Ong paper [27]. From a mathematical point of view, however, the most convenient representation of pointers is using atoms, in the context of nominal set theory [15].
In nominal set theory the essential property is equivariance, which is closure over uniform changes of atoms, which represent names. A strategy is in this setting an equivariant set of sequences, which means it is closed under name permutations.
**Definition 9** (Equivariant strategy). Let $\pi \cdot p$ denote the permutation action of a bijection $\pi : \mathcal{A} \rightarrow \mathcal{A}$ on a sequence $p$. A strategy $\sigma$ is equivariant if, for any play $p \in \sigma$ and any such $\pi$, the play $\pi \cdot p$ is also in $\sigma$.
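The permutation action is straightforward to implement. In the sketch below (our own, with atoms represented as integers), a permutation is given as a dict, defaulting to the identity outside its domain.

```python
def permute(pi, play):
    """Apply the name permutation `pi` pointwise to a play given as
    (move, name) pairs, leaving the move labels untouched."""
    return [(move, pi.get(name, name)) for move, name in play]
```

An equivariant strategy, viewed as a set of such plays, contains `permute(pi, p)` whenever it contains `p`.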
Names can be concretely represented by any discrete set such as natural numbers or strings. For the purpose of learning we will approximate it with a large finite set, let us say natural numbers less than some $N$. This means that even the simplest plays, $q^{n} \cdot a^{n'}$ (a question named $n$ followed by the answer it justifies, named $n'$), can have $N \times N$ distinct representations, for any $n \neq n' \leq N$. This seems exceedingly demanding for the powers of generalisation of a neural net. And indeed, the results are orders of magnitude worse if using the nominal representation. For example, in the simple case of sequential games of order 1 and width 1 the perplexity of the model increases from sub-unitary to 22.5.
The poor learnability of the nominal representation inspired us to ask a related question which, in some sense, represents a lowering of the bar. Pointer-plays are complex combinatorial structures. However, can much simpler equivariant structures be learned? We fixed on the pattern $abab \in \mathcal{A}^4$ as a short, fixed, simple such pattern. This problem seems both easier, as the pattern is short and fixed, and harder, as the pattern is purely nominal. As it turns out the nominal challenge dominates the combinatorial simplification. On the same neural architecture, with the same parameters as above, and a set of names fixed at size $N = 10^5$ the performance of the net was very poor, as indicated in Fig. 15. We can see that the evolution of the perplexity is non-monotonic, which usually indicates an unusually rugged landscape of the loss function with
many local extrema, and an extremely high perplexity \((3 \times 10^5)\) for validation and testing in contrast to the training set, which indicates a memorisation of the training set rather than genuine learning.
The poor performance of the neural net at learning equivariant patterns was confirmed by independent experiments \([46]\) carried out using a feed-forward neural network with various hidden-layer configurations, attempting to learn the same equivariant pattern \(abab\). When the set of atoms was large enough \((N > 2^{10})\) to prevent the network from simply memorising all instances of the pattern, the precision of recognition dropped under 60%, little better than random guessing.
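The pattern itself is trivially checkable by a program, which makes the neural net's difficulty all the more striking. A membership test (our own sketch, assuming \(a\) and \(b\) stand for distinct atoms):

```python
def is_abab(seq):
    """True iff seq matches the equivariant pattern abab,
    for distinct atoms a and b."""
    return (len(seq) == 4 and seq[0] == seq[2]
            and seq[1] == seq[3] and seq[0] != seq[1])
```

The test is equivariant: its outcome is unchanged under any injective renaming of the atoms, which is exactly the property the network fails to generalise.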
The difficulty of learning equivariant patterns is perhaps best illustrated by an even simpler experiment\(^2\). Using the same settings (a feed-forward 4-layer network with 6 and, respectively, 2 neurons in the middle layers, activated using the hyperbolic tangent function) we can compare the success of learning a line segment in a finite two-dimensional space versus a partition of the same space. A line is a basic equivariant pattern, a representation of \(\{(x, x) \mid x \in [-1, 1]\}\), whereas a partition \([0, 1] \times [-1, 1]\) is not. The results are seen in Fig. 16, which shows both the training data and the resulting model as a classification of the entire input space. The partition is learned almost perfectly, whereas only a gross approximation of the line is produced. For the discretisation used by the model, the ideal line has a width of 0.25% of the size of the input space, whereas the rough approximation in the figure has error $7.29\% \leq \epsilon \leq 17.51\%$ relative to the size of the input space. In contrast, for the partition the error at the boundary is mostly within the discretisation margin.

\(^2\)Using ConvNetJS, \url{https://cs.stanford.edu/people/karpathy/convnetjs/}
It is of course difficult to conclusively assert that a particular feature is not learnable by a neural network, particularly as they inhabit an infinitely large space of configurations. This is not to say that other ML techniques cannot prove successful at learning nominal features, and in fact the symmetries of nominal languages can be used to make the learning more effective [28]. What we can say is only that the same methods that produce remarkably good results for non-nominal representations fail to produce similar results on nominal (equivariant) representations.
5. Conclusion, related and further work
5.1. Recurrent neural nets
A perceptron is a simple computational element from a vector of real numbers to real numbers, which behaves like a weighted sum of the input composed with a step function. A perceptron is trained by adjusting the weights and the threshold values so that it fits a given set of examples. A feed-forward neural
network (FFN) is essentially a directed acyclic graph in which each node is a perceptron. The most common graph topology for an FFN consists of several layers of perceptrons so that each output from any given layer is connected to all inputs of the subsequent layer. An FFN is trained using back-propagation, which is a family of gradient-descent algorithms for adjusting the weights and thresholds of the perceptrons to match a given training data set.
Traditional FFNs have been successfully applied to many machine learning problems; however, when it comes to the task of sequence learning, the architecture of an FFN suffers from two main limitations: it cannot readily handle inputs of arbitrary length and it does not explicitly model time [31]. Furthermore, FFN models that implement some form of a sliding context window to implicitly capture the time dependency between the inputs cannot sufficiently model time, since the range of the captured dependency is limited by the size of the window [8, 12].
Recurrent neural networks (RNNs), unlike FFNs, allow for the presence of cycles in their underlying topology. This creates memory-like effects in the network which allow dynamic temporal behaviour. The way in which the RNN is topologically structured is connected to both its expressiveness and the training algorithms. As a result of these compromises, RNN architectures can be very diverse.
Unlike FFNs, recurrent neural networks can readily handle inputs of arbitrary length and can model the temporal patterns present in sequential data. Moreover, the expressive power of an RNN grows exponentially with the number of its hidden nodes while the training complexity grows only polynomially (at most quadratically) [31]. In most sequence-learning tasks, an RNN or a variant of it is usually the state-of-the-art method. RNNs have been applied not only to learning natural languages, but also to artificially generated languages of algorithmic patterns, and proved themselves to be more effective than other methods [29].
The addition of the recurrent edges to the architecture of RNNs gives them great expressive power; however, it also introduces the ‘vanishing and exploding gradient’ problem, which occurs while training the network when the errors are back-propagated across many time steps [7]. The Long Short-Term Memory (LSTM) is a crucial variant of RNNs that was introduced by [26] specifically to address this problem. Unlike conventional nets, in which the weights have the role of an implicit and quite rudimentary memory, LSTMs have explicit memory cells in their architecture, used to store gradient information for training. The architecture of the LSTM is quite sophisticated and a detailed presentation is beyond the scope here, but accessible tutorials are available [37].
5.2. Machine learning for programming languages
In this exercise we have intentionally used a particularly simple, off-the-shelf, LSTM-based algorithm for latent semantic analysis. All the parameters of the computational experiments were fixed in advance and were not tweaked to improve results. The results were in general excellent. We noted that pointer models, which have more structure, tended to be more amenable to learning than pointer-free models, and also that sequential models, which have more structure, tend to be more recognisable than concurrent models. Investigating whether the greater learnability of more structured languages is a general feature of LSTM-based language learning would be an interesting exercise for the future. We also note that in general 10k samples was enough, except for detecting intentional perturbations in pointer-free models, where models constructed with 100k samples were good, whereas models constructed with 10k samples were unsatisfactory. We note that our convergence criterion was fixed (13 epochs) and did not take into account residual learning rates. Examining the training logs suggests that training for these models had high residual learning rates, so extra epochs might have helped. It would be interesting to re-evaluate this study by changing the stopping criterion to consider the evolution in time of the learning rate rather than fixing the number of epochs.
As in all optimisation work many parameters can be tweaked in search of improvement, but doing that would detract from the main point of our paper, which is that the game model is a representation of the semantics of programming languages which is amenable to machine learning via LSTMs. Details notwithstanding, we find this fact alone quite remarkable.
Using machine learning for programming language semantics is largely new and unexplored terrain, even though heuristic search techniques such as genetic algorithms have been applied to software engineering problems [24]. This is a well researched area which is related but complementary to our interest. Primarily, search-based software engineering (SBSE) is a collection of syntactic techniques, which rely on manipulation of code, usually as a syntax tree, to extract information about the code, to manipulate the code, or to detect patterns in the code (common bugs, anti-patterns, etc.). There is a significant area of overlap between the aims and techniques of SBSE and other heuristic-heavy programming-language analyses and manipulation such as refactoring, slicing, test-generation, or verification. By contrast, semantic models are independent of syntax. In fact the kind of analysis we have proposed here ignores syntax and relies directly on program behaviour instead. Indeed, latent semantic analysis of code when the source code is available is trivial: one can merely scan it for occurrences of terms associated with concurrency, such as parallel execution or semaphores. The problem becomes more interesting when the source code is not available: given a piece of compiled code, e.g. a module or a library, can we determine whether it originates in one language or another just by examining the way it interacts with its calling context? Our analysis shows that at least sometimes the answer is positive.
There are some obvious limitations to our approach. First of all, we looked for distinctions in plays rather than in strategies, simply because learning strategies (potentially infinite sets of plays) seems significantly harder than learning the plays themselves. But there are semantic differences between languages which are only reflected at the level of the strategy. For example PCF [27], sequential IA [3] and non-deterministic IA [25] have the same notion of legality on plays but differ at the level of strategies. PCF requires innocent strategies, sequential IA deterministic strategies, and non-deterministic IA non-deterministic strategies. Moreover, the formulation of these distinctions requires both pointers and
answer-values, information which we abstract away from our modelling. Using our set-up these distinctions are lost. Capturing such subtle distinctions would require a different approach.
5.3. Future work
However, by and large the results of our experiment are very encouraging. The quality of the models is high, as evidenced by their robust discriminatory powers, and the required computational resources are modest. These results make us optimistic about using this methodology on practical programming languages as encountered ‘in the wild’. The process of creating a corpus of training traces in the absence of a model is of course different. We need a large code base, a compiler, and a way to instrument the interface between a part of the code taken to be ‘the term’ and the rest of the program taken to be ‘the context’. Much like a profiler, the instrumentation should record how, in any execution, the term interacts with the context via its free variables (function or method calls and returns).
What is interesting is that a model of code obtained from real code ‘in the wild’ will learn not only what is ‘legal’ behaviour but also what is ‘idiomatic’ behaviour – patterns of behaviour which are specific to the code-base used for learning. Depending on the quality of the model this can have some possibly interesting applications. Note that this same phenomenon appears in the case of machine-learning natural language from corpora [10], except that in the case of programming languages the idiomatic aspects are more likely to be seen as embodiments of de-facto practices rather than problematic biases.
For example, the model can be used for novelty detection [32, 33] in order to augment code inspections: instead of merely studying the code syntactically, the behaviour of code-in-context can be analysed for conformance with the existing body of code. Syntax-independent novelty analysis can have other well-known applications, for example to security. Unexpected or unusual patterns of interactions are, for example, typical for attempts to compromise the integrity of a system.
A recogniser running in reverse is a generator, and generating valid traces – especially idiomatic valid traces – is a possibly interesting way of automating the testing of functional interfaces. Generating random data for automating testing is a well-understood process [11]. However, generating random functional behaviour is a much more complicated proposition, and syntactic approaches do not seem equally promising.
Semantic-directed techniques, in particular using models that are both compositional and operational such as trace semantics or game semantics, have been advocated for a long time but did not make as deep inroads as expected in the practice of programming. A pragmatic disadvantage is that semantic models can be mathematically demanding, but this is not even the main problem. The main difficulty is that on the balance they are both difficult to construct and brittle, in the sense that small changes to the language can require a total re-thinking of its semantic model. Moreover, most languages are not syntactically (and semantically) self-contained because they interact with other languages via mechanisms such as foreign function interfaces. Machine learning, if effective on real languages, solves both these problems. It hides the mathematical complexity of the model behind the automated learning, and it can derive models out of existing code-bases, capturing not only a sense of what is legal but also what is idiomatic (for engineering, but also cultural reasons) in a particular language. In the end, as it tends to be the case with machine learning, the resulting model may be opaque and uninformative but it may end up being effective enough for practical purposes.
Finally, our research has incidentally discovered an unexpected limitation of neural nets in learning equivariant patterns, i.e. patterns closed under permutation. Even a simple equivariant pattern such as \texttt{abab} was beyond the ability of an LSTM to generalise, even from thousands of examples, whereas a human subject needs only a handful of examples to infer the pattern in sequences such as \texttt{John, Mary, John, Mary} or \texttt{Tony, Dave, Tony, Dave} or \texttt{Foo, Bar, Foo, Bar}. Since equivariance is of critical importance in understanding names, either in the context of programming languages (variables) or natural languages (proper names), we think we have identified a significant new challenge for neural nets, and machine learning in general. This should be investigated more deeply.
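To make the notion concrete, here is a hedged sketch (not the original experimental code; all names are illustrative) of how equivariant \texttt{abab} training samples could be generated and recognised:

```python
import random

def equivariant_abab_samples(names, n, seed=0):
    """Generate instances of the equivariant pattern abab: the pattern is
    defined only up to permutation of the name alphabet, e.g.
    ['John', 'Mary', 'John', 'Mary'] and ['Foo', 'Bar', 'Foo', 'Bar']
    are both instances."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        a, b = rng.sample(names, 2)  # a fresh pair of distinct names
        samples.append([a, b, a, b])
    return samples

def is_abab(seq):
    """Membership test for the pattern, independent of the names used."""
    return (len(seq) == 4 and seq[0] == seq[2]
            and seq[1] == seq[3] and seq[0] != seq[1])
```

A human infers `is_abab` from a handful of such samples; the observation above is that an LSTM trained on thousands of them still failed to generalise across unseen names.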
Acknowledgments. This paper is motivated by a challenge from Martin Abadi. Preliminary experiments were conducted by Victor Patentasu and were presented at the Off the Beaten Track workshop of POPL 2017. Khulood Alyahya has been supported by EPSRC grant EP/N017846/1. Dan R. Ghica has been supported by EPSRC grant EP/P004490/1. We thank the anonymous reviewers and the journal editor for their many useful suggestions.
C2TACO: Lifting Tensor Code to TACO
Citation for published version:
Digital Object Identifier (DOI):
10.1145/3624007.3624053
Link:
Link to publication record in Edinburgh Research Explorer
Document Version:
Peer reviewed version
Published In:
GPCE 2023: Proceedings of the 22nd ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences
C2TACO: Lifting Tensor Code to TACO
José Wesley de Souza Magalhães
jwesley.magalhaes@ed.ac.uk
University of Edinburgh
UK
Jackson Woodruff
J.C.Woodruff@sms.ed.ac.uk
University of Edinburgh
UK
Elizabeth Polgreen
elizabeth.polgreen@ed.ac.uk
University of Edinburgh
UK
Michael F. P. O’Boyle
mob@inf.ed.ac.uk
University of Edinburgh
UK
Abstract
Domain-specific languages (DSLs) promise a significant performance and portability advantage over traditional languages. DSLs are designed to be high-level and platform-independent, allowing an optimizing compiler significant leeway when targeting a particular device. Such languages are particularly popular with emerging tensor algebra workloads. However, DSLs present their own challenge: they require programmers to learn new programming languages and put in significant effort to migrate legacy code.
We present C2TACO, a synthesis tool for synthesizing TACO, a well-known tensor DSL, from C code. We develop a guided enumerative synthesizer that uses automatically generated IO examples and source-code analysis to efficiently generate dense tensor algebra code. C2TACO is able to synthesize 95% of benchmarks from a tensor benchmark suite, outperforming an alternative neural machine translation technique, and demonstrates substantially higher accuracy when evaluated against two state-of-the-art existing schemes, TF-Coder and ChatGPT. Our synthesized TACO programs are, by design, portable, achieving significant performance improvements when evaluated on multi-core and GPU platforms.
CCS Concepts: • Software and its engineering → Source code generation; Domain specific languages.
Keywords: Program Lifting, Synthesis, TACO, Tensor Algebra
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
GPCE ’23, October 22–23, 2023, Cascais, Portugal
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0406-2/23/10...$15.00
https://doi.org/10.1145/3624007.3624053
1 Introduction
In the last decade, we have witnessed a dramatic increase in machine learning (ML) use in applications ranging from cloud computing to edge devices [45]. ML workloads are dominated by tensor code [60], leading to large-scale efforts aimed at improving its performance [67]. Dense tensor algebra is highly parallel, allowing efficient hardware exploitation across platforms. However, extracting effective parallelism from existing languages is difficult with current compiler technology. This language/compiler failure has led to the growth of domain-specific languages (DSLs) aimed at efficient linear algebra e.g., Diesel [27], TACO [37]. These DSLs deliver excellent cross-platform performance outperforming existing approaches [38].
Accessing such performance is straightforward for new applications: just write your program in the appropriate DSL. However, for legacy programs it is more problematic, with the programmer responsible for both rewriting sections in the new DSL and reintegrating them with the existing application. As DSLs continuously evolve, this rewriting must be repeated several times throughout the lifetime of the application. This is costly and error-prone, presenting a serious barrier that prevents existing applications from harnessing hardware performance.
1.1 Existing Techniques
Rewriting is a significant issue and there are a number of different approaches aimed at automatically porting programs to access hardware performance without programmer effort.
API Matching: Rather than translate programs into high level-DSLs, some techniques aim to match and replace sections of user code with fast libraries. For example, Ginsbach et al. [30], De Carvalho et al. [24], and Martínez et al. [44] propose schemes to discover specific code patterns, such as matrix multiplication, and replace them with accelerator
calls. However, these matching tools are often brittle and cannot be extended. They require retooling whenever the target API changes, which makes such approaches non-portable.
**Program Lifting via Synthesis:** There are several lifting approaches based on program synthesis, i.e., algorithms for generating programs from specifications. Synthesis is used directly to lift legacy code in the work by Kamil et al. [34], where the user defines the region of code to lift. However, a compiler from the program source to the internal format and a decompiler to the high-level DSL have to be provided, limiting the applicability of this approach to new DSLs and legacy software. The synthesis used in this lifting relies on SMT solvers to guarantee correctness and drive search. This means these techniques cannot be easily applied to the benchmarks we tackle in our paper, which, owing to pointers and to unbounded tensors and loops, are too complex for state-of-the-art SMT-solver-driven software verification tools to reason about. We attempt to verify bounded correctness for some of our synthesized code, but even for simple benchmarks, we cannot verify correctness for tensors of size more than $10 \times 10 \times 10$ within a timeout of 1 hour. This makes such verification impossible to embed into a synthesis loop where we check thousands of candidates. Our synthesis must use alternatives like observational equivalence [22] in order to achieve the necessary scalability.
**Neural Machine Translation (NMT):** Language models have proved useful in translation/transpilation tasks. In the work by Roziere et al. [55], an unsupervised Java to C# model is learned using a sequence-to-sequence transformer. It is shown to be reasonably accurate, however, like most NMT techniques, it requires a large corpus of source and target code which is not available for emerging DSLs where most source programs do not have a corresponding domain-specific representation.
### 1.2 Our Approach
This paper presents C2TACO, a synthesis tool for lifting dense tensor code written in C to TACO. We propose a guided enumerative synthesis method to generate TACO programs based on automatically generated IO examples. We use source code analysis to retrieve features from the original programs and use them as search aids during synthesis.
We compared the performance of C2TACO against a neural machine translation approach and two state-of-the-art existing schemes, TF-Coder [59] and ChatGPT [48]. When evaluated on a suite of tensor benchmarks, C2TACO is able to synthesize 95% of the programs, demonstrating considerably higher accuracy than the other techniques (10%, 32% and 24% respectively). Because they are portable, our lifted TACO programs achieve significant performance improvements over the original implementation when evaluated on a multi-core (geo-mean 1.79x) and GPU (geo-mean 24.1x) platform.
This paper makes the following contributions:
- A guided program synthesis technique, based on behavioral equivalence and on the original program structure, that discovers and lifts legacy C code to TACO, a domain-specific tensor language.
- An extensive evaluation against existing synthesizers and neural machine translation models, showing that our approach has higher coverage and is more accurate than existing approaches.
### 2 Motivation
In this section we briefly introduce TACO and describe how and why we lift C to TACO.
#### 2.1 TACO
TACO [38] is a high-level programming language for tensor contractions. A tensor is a generalization of a matrix (order 2) to higher orders. TACO supports tensor expressions of unbounded length over tensors of unbounded order. The core language is based on Einstein summation (Einsum) notation, allowing concise representation of tensor computation using tensor index notation. It has been used in other frameworks including TVM [19].
Consider the matrix-vector product and summation example: $a_i = \sum_k X_{i,k} b_k + c_i \forall i$. In C a simple sequential implementation would result in the code shown in Figure 1. While straightforward, targeting this code for different platforms such as multi-cores or GPUs would require significant code restructuring. Writing the example in TACO gives:
$$a(i) = X(i, k) \ast b(k) + c(i).$$
This is nearer the original formulation and, crucially, does not include any assumptions about whether the platform is sequential or parallel. The TACO compiler takes this program as input and generates platform-specific optimized code.
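For reference, the loop nest of Figure 1 can be sketched in plain Python (an illustrative re-implementation of the same semantics, not the paper's code):

```python
def matvec_plus(X, b, c):
    """Reference semantics of a(i) = X(i, k) * b(k) + c(i): a matrix-vector
    product followed by an elementwise addition, written as explicit loops."""
    n, m = len(X), len(b)
    a = [0.0] * n
    for i in range(n):
        for k in range(m):
            a[i] += X[i][k] * b[k]   # the reduction over index k
        a[i] += c[i]                 # the elementwise addition
    return a
```

In TACO the same computation is the single line of index notation above; how the `i` and `k` loops are ordered, parallelized, and scheduled is left entirely to the compiler.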
#### 2.2 Example
We take existing legacy C code, lift it to TACO, and then use TACO’s code generation abilities to target diverse, high-performance platforms. Consider the program in Figure 2. This is a C function from the DSPStone benchmark suite [72], which makes use of post-increment pointer arithmetic to target the addressing modes found in DSP processors. Although the pointers are a hindrance to understanding, this program is in fact matrix multiplication.
C2TACO uses automatically-generated input/output examples as a specification for an enumerative synthesis algorithm. C2TACO uses information about the C program to guide a search through the TACO grammar in a type-directed template-based enumerative fashion and produces the TACO code shown in Figure 2. As well as being higher-level and easier to read than the original C code, the synthesized TACO program can be optimized and targeted at different platforms.
Figure 2 shows the code generated from tensor index notation for a multi-core CPU and an NVIDIA GPU. For the CPU, the TACO compiler generates OpenMP code with a dynamic runtime schedule policy; in effect, lifting is an automatic parallelization method for certain C programs. For the NVIDIA GPU, the TACO compiler generates CUDA code (also shown in Figure 2). Although the code is syntactically distinct from the OpenMP version, the TACO compiler again exploits parallelism, with implicit concurrency across all of the threads executing the shown kernel.
2.3 Validity
Our synthesized TACO programs are demonstrated to have observational equivalence with the original programs in C. We also manually inspect the synthesized code. Proving these programs are equivalent is a challenging task due to the unbounded loops and data structures and pointers present in the code. Using CBMC [39], a model checker for C programs, we are able to verify three representative benchmarks using very small loop bounds and tensors (in one case, we can only verify up to a loop bound of 10 within a timeout of one hour). Full verification of synthesized code is an open challenge and out of the scope of this paper.
3 Overview
Figure 3 shows our overall approach. We summarize the pipeline of C2TACO and describe the key components in Sections 4 and 5, followed by an extensive evaluation (Section 7).
Given a program P written in C, we first detect the program sections that are suitable for lifting using neural program classification. Once we have extracted a candidate region K, we generate input-output (IO) examples which are then used as a specification for our synthesis scheme. Our system performs a series of static code analyses to extract relevant features from K. We then search the TACO grammar for equivalent programs that satisfy the IO specification, using the features of K to prune the program space. Once we have identified a suitable equivalent TACO program T, we lower it to the target platform and insert it into the original program for execution.
3.1 Classification
We take as input general-purpose programs that perform varied computations and perform lifting to a domain-specific language for tensor contractions. Because we cannot express general computation in TACO, there is a need to identify the code regions that can be lifted and accelerated. We use prior work in neural program classification [69] to determine which parts of the program represent tensor operations.
3.2 IO Generation
Our synthesizer is driven by a specification of observational equivalence (i.e., randomly generated input-output examples). We generate 10 input-output examples. Whilst this means that we cannot guarantee absolute equivalence of the synthesized and source code, it allows our synthesis to scale to programs too complex to be reasoned about by the SMT solvers that drive other lifting techniques [17].
3.3 Lifting via Synthesis
Once we have the IO examples of the code to lift, we explore the space of TACO programs using enumeration of templates over TACO’s grammar to generate programs that may be equivalent to the original C program. We execute each candidate on the IO samples to see if it is equivalent. The Enumerative Template Synthesis algorithm is described in Section 4. Given the unbounded size of the TACO program space, this can lead to excessive synthesis time. We, therefore, introduce a compiler tool that extracts a set of features from the original C program and use it to guide search, as described in Section 5.
3.4 Lowering
Once we have a suitable candidate TACO program, we then compile it to the target platform using TACO’s platform-specific optimizing compilation. In this paper, we investigate multi-core and GPU targets. The generated code is then patched into the original calling program and evaluated on the target platform.
4 Enumerative Template Synthesis
The task of automatically lifting C to TACO can be defined as a formal program synthesis problem. That is, given a source program \( P_C : \vec{x} \rightarrow \vec{y} \), written in C, we wish to find an equivalent program \( P_T : \vec{x} \rightarrow \vec{y} \), written in TACO, satisfying the specification \( \forall I \in \vec{x}.\, P_C(I) = P_T(I) \), i.e., the TACO program behaves identically to the C program on all possible inputs.
We use a bottom-up enumerative synthesis algorithm to enumerate template TACO programs, i.e., TACO programs that use symbolic variables in place of all tensors and constants. We then check whether there is a valid substitution of inputs and constant literals for these symbolic variables that satisfies the specification. The enumeration of our algorithm is based on classic algorithms in the literature [9, 66], while the use of a sub-procedure to instantiate concrete variable names and constant literals is based on CEGIS(T) [3].
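A minimal sketch of this loop structure, with illustrative callback names (`enumerate_templates`, `instantiate`) standing in for the paper's algorithms:

```python
def synthesize(enumerate_templates, instantiate, max_len):
    """Skeleton of the synthesis loop: enumerate template programs bottom-up
    by length, and for each template search for a substitution of concrete
    tensors and constants satisfying the IO specification. Returns the first
    satisfying (template, substitution) pair, or None."""
    for length in range(1, max_len + 1):
        for template in enumerate_templates(length):
            subst = instantiate(template)  # None if no substitution works
            if subst is not None:
                return template, subst
    return None
```

This is only the control skeleton; the two sub-procedures are the subject of Sections 4.3 and 4.4.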
4.1 The Grammar
Our synthesis algorithm enumerates through a grammar $G$, shown in Figure 4, which defines a search space of possible template TACO programs. The grammar $G$ is defined as a set of nonterminal symbols $NT$, terminal symbols, and production rules $R$. For each rule $r \in R$, $|NT|$ indicates the number of non-terminal symbols in the rule. We refer to the nonterminal symbols on the right-hand side of a rule in the order they appear as $NT_0, NT_1, \ldots$. For example, for the production rule $\{PROGRAM\} := \{TENSOR\} = \{EXPR\}$, the nonterminals are $NT_0 = \{TENSOR\}$ and $NT_1 = \{EXPR\}$, and $|NT| = 2$.
The grammar includes symbolic constants and symbolic tensor IDs. When we test the program, we substitute these IDs and symbolic constants with input variables and constants from the source program and test all valid substitutions until we find a program that satisfies the specification. We limit our grammar to 4 index variables, which limits the number of tensor dimensions we can reason about to 4.
\begin{align*}
\langle PROGRAM \rangle & ::= \langle TENSOR \rangle = \langle EXPR \rangle \\
\langle TENSOR \rangle & ::= \langle ID \rangle(\langle INDEX\text{-}EXPR \rangle) \mid \langle ID \rangle \\
\langle INDEX\text{-}EXPR \rangle & ::= \langle INDEX\text{-}VAR \rangle, \langle INDEX\text{-}EXPR \rangle \mid \langle INDEX\text{-}VAR \rangle \\
\langle INDEX\text{-}VAR \rangle & ::= i \mid j \mid k \mid l \\
\langle EXPR \rangle & ::= \langle EXPR \rangle + \langle EXPR \rangle \mid \langle EXPR \rangle - \langle EXPR \rangle \mid \langle EXPR \rangle \ast \langle EXPR \rangle \\
& \quad \mid \langle EXPR \rangle / \langle EXPR \rangle \mid \langle CONSTANT \rangle \mid \langle TENSOR \rangle \\
\langle ID \rangle & ::= T_0 \mid T_1 \mid T_2 \mid \ldots \\
\langle CONSTANT \rangle & ::= C_0 \mid C_1 \mid C_2 \mid \ldots
\end{align*}
Figure 4. TACO grammar.
4.2 Specification
Given a source function $P_C : \vec{x} \rightarrow \vec{y}$, we wish to find an equivalent TACO function $P_T : \vec{x} \rightarrow \vec{y}$ such that $\forall I \in \vec{x}.\, P_C(I) = P_T(I)$. Checking this equivalence is undecidable in general; however, due to the lack of data-dependent control flow in TACO programs, it is sufficient in almost all cases to check observational equivalence.
We extend the method set out in FACC [69], where inputs are randomly generated according to manually given constraints dictating the length of arrays and favoring smaller values to make evaluation faster. We constrain arrays to be of size 4096, and fix tensor-dimensions to be equal (e.g., a 2-dimensional tensor is of size $64 \times 64$).
A single input-output example $(I, O)$ consists of a set of randomly generated arguments $I = (i_1, \ldots, i_n)$, corresponding to the input parameters $\vec{x} = (x_1, \ldots, x_n)$, and an output $O = P_C(i_1, \ldots, i_n)$. We generate 10 input-output examples, which form a specification $\phi_{IO} = \{(I, O)_1, \ldots, (I, O)_{10}\}$. A program $P_T$ satisfies the specification $\phi_{IO}$ iff $\forall (I, O) \in \phi_{IO}.\, P_T(I) = O$.
To determine this in practice, we run $P_T$ using the TACO Python API, checking if the behavior matches the corresponding outputs.
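The IO-based specification can be sketched as follows; `reference_fn` stands in for the instrumented C function, and the argument sizes are illustrative rather than the paper's 4096 / 64×64 defaults:

```python
import random

def make_io_spec(reference_fn, arg_sizes, n_examples=10, seed=0):
    """Build the specification phi_IO: n random input tuples together with
    the reference outputs produced by the source program."""
    rng = random.Random(seed)
    spec = []
    for _ in range(n_examples):
        inputs = [[rng.uniform(0.0, 1.0) for _ in range(size)]
                  for size in arg_sizes]
        spec.append((inputs, reference_fn(*inputs)))
    return spec

def satisfies(candidate_fn, spec):
    """P_T satisfies phi_IO iff it reproduces the output on every example."""
    return all(candidate_fn(*inputs) == output for inputs, output in spec)
```

In C2TACO the candidate is run through the TACO Python API rather than as a plain Python callable, but the acceptance condition is the same.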
4.3 Template Enumeration
We implement bottom-up enumeration, i.e., we enumerate templates starting with the shortest first. We define the length of a template as the number of references to tensors or constants in the template; e.g., the template $T_0[i] = T_1[i] + 2$ has length 3 because it refers to $T_0$, $T_1$, and 2.
We enumerate templates as shown in Algorithm 2, by iterating through production rules until we have found all possible complete templates of length 1 in the grammar. We then increase the length and repeat the process, using the previously enumerated templates as building blocks, until we have hit the maximum user-given length. Each time the length increases, we add a new tensor ID and a new symbolic constant to the set of candidate templates. This is shown in Algorithm 1.
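A sketch of the bottom-up enumeration over a flattened string form of the grammar; the fixed leaf pool is a simplification of "add a fresh tensor ID and constant at each length step":

```python
OPS = ["+", "-", "*", "/"]

def enumerate_templates(max_len):
    """Bottom-up template enumeration: a template of length L (L = number of
    tensor/constant references) combines two smaller templates with a binary
    operator, reusing previously enumerated templates as building blocks."""
    leaves = [f"T{i}" for i in range(max_len)] + [f"C{i}" for i in range(max_len)]
    by_len = {1: set(leaves)}
    for length in range(2, max_len + 1):
        cands = set()
        for left_len in range(1, length):
            for lhs in by_len[left_len]:
                for rhs in by_len[length - left_len]:
                    for op in OPS:
                        cands.add(f"({lhs} {op} {rhs})")
        by_len[length] = cands
    return by_len
```

Real templates are assignments with index expressions rather than flat strings, but the length-indexed memoization is the core of the algorithm.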
We discard any invalid candidates during enumeration, i.e., templates that do not type check or are unsupported by TACO. More specifically we discard: any candidate that iterates over two different dimensions with the same index variable (e.g., $T_0(i, i)$); any candidate where the same tensor appears more than once in a program with different orders (e.g.: $T_0(i) = T_1(i) * T_1(i, j)$); and any candidate where the same tensor appears on both sides of an assignment (e.g.: $T_0(i, j) = T_0(i, j) + T_1(j, k)$).
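The three pruning rules can be sketched as a validity predicate over an illustrative `(name, index_tuple)` representation of tensor accesses:

```python
def is_valid_template(lhs, rhs):
    """Prune the invalid candidates listed above: a repeated index variable in
    one access, the same tensor at two different orders, or the output tensor
    appearing on the right-hand side."""
    out_name, out_idx = lhs
    if len(set(out_idx)) != len(out_idx):        # e.g. T0(i, i)
        return False
    orders = {out_name: len(out_idx)}
    for name, idx in rhs:
        if len(set(idx)) != len(idx):            # repeated index variable
            return False
        if name == out_name:                     # output on both sides
            return False
        if name in orders and orders[name] != len(idx):  # inconsistent order
            return False
        orders[name] = len(idx)
    return True
```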
4.4 Instantiating Templates
After we have generated all templates of length $L$, we check whether any of these templates generate programs that satisfy the specification, $\phi_{IO}$ (see Section 4.2). To do this, we enumerate through all substitutions that map all symbolic constants in the candidate program to concrete values, and all tensor IDs to inputs in the specification, until we find a substitution that gives us a TACO program that satisfies the specification. This is shown in Algorithm 2. We limit the concrete constant values to constants present in the source program.
We check all possible substitutions until we find a substitution that results in a complete TACO program that satisfies the specification, which is checked by the check procedure. Although checking all possible substitutions has $L!$ complexity for a template of length $L$, $L$ is typically small ($< 5$). We check the templates of length $MAX$ before any shorter templates, as this is the likely length of the target program.
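A sketch of the substitution search, with illustrative names; the `check` callback stands in for running the instantiated TACO program against the IO specification:

```python
from itertools import permutations, product

def instantiate(tensor_ids, const_ids, inputs, source_consts, check):
    """Try every injective mapping of symbolic tensor IDs to concrete input
    arrays, and every assignment of symbolic constants to literals taken from
    the source program, until some substitution satisfies the spec. Worst case
    is factorial in the template length L, but L is typically small (< 5)."""
    for tensors in permutations(inputs, len(tensor_ids)):
        for consts in product(source_consts, repeat=len(const_ids)):
            subst = {**dict(zip(tensor_ids, tensors)),
                     **dict(zip(const_ids, consts))}
            if check(subst):
                return subst
    return None
```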
5 Synthesis Guided by Code Analysis
The search space of possible TACO templates is large, and so, in C2TACO, we use program analysis to focus the scope of the synthesis search, prioritizing candidates that are more likely to be correct. In particular, we use heuristics to estimate the correct TACO template length (Section 5.1), the correct tensor dimensions (Section 5.2), and the likely operators (Section 5.3).
5.1 TACO Program Length
The length of a TACO program is related to the number of array/pointer references and constants in the original C code. However, temporary variables that capture common subexpressions and mutable arrays mean that there is no direct correspondence. Fixing the size of the target TACO program reduces the search space because we only have to enumerate candidates once.
To determine the range of sizes C2TACO explores, we focus on the definition of the output array and examine the number of input arrays, or uses [23]. At each definition, we iteratively build the set of variables used by that definition. We use reaching analysis to disambiguate between different references to the same (mutable) variables. We then reduce the constructed set in the presence of summations or reductions.
In C, when writing a reduction or summation, a variable appears on both sides of an assignment but only once in the corresponding TACO program. For this reason, we apply a simple data-dependence analysis to check whether there is a recurrence; if there is, we do not count the variable twice.
For example, in Figure 1, the use set for the output array \( a \) is \( \{\sum, X, b, c\} \). This is reduced to \( \{X, b, c\} \) after detecting the reduction on \( \sum \), giving 4 \( (a, X, b, c) \) as the predicted number of tensors in the TACO program.
5.2 Tensor Dimensions
C programs frequently contain linearized arrays, where a single pointer is used to represent a multi-dimensional tensor. To recover the multi-dimensional structure, we delinearize such accesses. For example, consider the linearized access \( p_c[Z \ast k + i] \) inside a loop nest over \( k \), \( i \), and \( f \): here \( U = [Z, 1, 0] \) is the row of access coefficients, the iteration
vector \( J = [k, i, f]^T \), and \( UJ \) is the affine expression for the array access \([Z \ast k + i]\):
\[
UJ = [Z, 1, 0] \begin{bmatrix} k \\ i \\ f \end{bmatrix} = [Z \ast k + i]
\]
We now delinearize by constructing a transformation \( S \), such that \( SU \) is a matrix of 1's and 0's. For our example, \( S = \begin{bmatrix} (\cdot)/Z \\ (\cdot)\,\%\,Z \end{bmatrix} \), where \( (\cdot)/Z \) and \( (\cdot)\,\%\,Z \) denote integer division and modulo by \( Z \). For details of this step, we refer the reader to the paper [47]. We apply the transformation to give:
\[
SUJ = \begin{bmatrix} (\cdot)/Z \\ (\cdot)\,\%\,Z \end{bmatrix} [Z, 1, 0]\, J = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} J
\]
This gives us a 2D delinearized array access \( p_{c}[k, i] \), so we begin our search for TACO programs using 2D tensors.
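The effect of the div/mod transformation \( S \) can be checked with a few lines of Python (illustrative only): for an access \( p_c[Z \ast k + i] \) with \( 0 \le i < Z \), integer division and modulo by \( Z \) recover the 2D indices exactly.

```python
def delinearize(offset, Z):
    """Apply the div/mod transformation S to a linear offset Z*k + i."""
    return offset // Z, offset % Z  # recovers (k, i)

# Round-trip check: linearize then delinearize for a small range.
Z = 7
assert all(delinearize(Z * k + i, Z) == (k, i)
           for k in range(5) for i in range(Z))
```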
5.3 Operator Analysis
Finally, we use the source code to predict which operators are likely to be included in the target program. We do this based on a straightforward analysis of the Abstract Syntax Tree of the source code, which counts the number of appearances of each operator type. This effectively reduces the search space of possible TACO programs by eliminating unlikely combinations of operators.
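As a sketch of this analysis: the paper counts operators on C ASTs via a clang plugin; here, purely for illustration, we count binary operators in an equivalent Python expression using the standard `ast` module.

```python
import ast
from collections import Counter

def count_operators(expr):
    """Count binary-operator occurrences in an expression's AST."""
    counts = Counter()
    for node in ast.walk(ast.parse(expr, mode="eval")):
        if isinstance(node, ast.BinOp):
            counts[type(node.op).__name__] += 1
    return counts

# A kernel body like `X[i][j] * b[j] + c[i]` uses one '*' and one '+',
# so the search can prioritize templates containing exactly {Mult, Add}.
ops = count_operators("X * b + c")
```

The resulting operator multiset is then used to discard templates whose operator combination does not match the source.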
6 Experimental Methodology
To evaluate C2TACO, we compare its performance against other techniques. We implemented a simple version of the synthesis process described in Section 4 and an alternative approach based on neural machine translation. In addition, we consider an existing large language model, ChatGPT, and an IO-based synthesizer, TF-Coder.
6.1 Alternative Approaches
**ETS.** C2TACO uses the synthesis algorithm described in Section 4 combined with the heuristics described in Section 5. To evaluate the contribution of the heuristics in C2TACO, we compare to the most basic enumerative template synthesis algorithm described in Section 4 (without any heuristics), which we refer to as ETS.
**Neural Machine Translation.** NMT converts text sequences from one language to another by means of a deep neural network and has shown positive results on code tasks. We therefore frame the task of lifting C to TACO as a neural machine translation problem. We train a Transformer [68] that, given a C input sequence, minimizes the edit distance between the predicted and ground-truth TACO. Once trained, given an unseen C program, the model generates the most likely equivalent TACO program.
The main challenge for any new DSL is the availability of training data. To overcome this, we generate a synthetic dataset based on the TACO grammar shown in Figure 4. We compile the synthetically generated TACO programs to generate the equivalent C programs. We limit our synthetic dataset to programs that contain a maximum of 5 tensors of no more than 4 dimensions, and where all datatypes are integers.
We enumerate this space in a bottom-up manner, similar to the enumeration performed by our synthesis algorithm, and use testing to eliminate semantically equivalent programs. Since TACO-generated programs contain details that are unlikely to be present in real-world tensor kernels, such as memory allocation, we modify the clang compiler to extract only the kernel signature and computation of the program for our equivalent C program.
We generate 800K pairs of C programs and TACO expressions, of which 5K are held out for validation, 5K for testing, and the remainder used for training. The trained model is a Transformer with 6 encoders and 6 decoders, 16 attention heads, and an embedding size fixed at 1024.
6.2 Existing Approaches
**TF-Coder.** TF-Coder [59] is an open-source, publicly available program synthesizer. It takes a single input-output example as specification and generates a corresponding TensorFlow program. Although the search space of TF-Coder is not defined by the same grammar we considered in our synthesis methods, we compare C2TACO against TF-Coder because both synthesize programs from IO examples and operate on the domain of tensor computations. We use one of the IO examples automatically generated by our synthesis scheme, but limit it to fewer than 100 elements as required by TF-Coder.
**ChatGPT.** ChatGPT [48] is a large-scale language model based on GPT 3.5. It has been used for a wide number of tasks, including code generation. We used version 3.5 in our experiments. As its accuracy depends on the quality of its prompts, we experimented with various formats and found the following to be the most effective, followed by the original source code:
"Translate the following C code to an expression in the TACO tensor index notation. The expression must be valid as input to the taco compiler. Return the expression and only the expression, no explanations."
6.3 Setup
**Benchmarks.** To evaluate C2TACO, we designed two different suites of tensor algebra benchmarks. The first contains C programs generated by the TACO compiler, a distinct subset of those used to train the NMT model. The second contains programs from existing software libraries. We refer to these suites as artificial and real-world, respectively.
The real-world benchmarks originate from different applications. We selected a subset of the programs used by previous synthesis work [22]:
Table 1. Synthesis coverage of different approaches on the artificial dataset.
<table>
<thead>
<tr>
<th>TACO Program</th>
<th>TF-Coder</th>
<th>ChatGPT</th>
<th>NMT</th>
<th>ETS</th>
<th>C2TACO</th>
</tr>
</thead>
<tbody>
<tr>
<td>a(i) = b(i) + c(i) - d(i)</td>
<td>✓</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>a(i,j) = b(i,j) + c(i,j)</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>a(i) = b(i) * c(i)</td>
<td>✓</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>a(i) = b(i) + c(i) + d(i) + e(i)</td>
<td>✓</td>
<td>✗</td>
<td>✓</td>
<td>✗</td>
<td>✓</td>
</tr>
<tr>
<td>a(i,j) = b(i,j) * c(j)</td>
<td>✗</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>a(i,j) = b(i,k) * c(k,j)</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>a(i,j) = b(i)</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>a(i,j) = b(i,j,k) * c(k)</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>a(i,j) = b(i,k,l) * c(l,j) * d(k,j)</td>
<td>✓</td>
<td>✓</td>
<td>✗</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
- **blas**: baseline implementation of functions from the BLAS [18] linear algebra library as synthesized by Collie et al. [20].
- **DSP**: signal processing functions adapted from the TI [2] library.
- **makespeare**: programs that manipulate arrays of integers. Originally from Rosin [54].
- **mathfu**: mathematical functions from the Mathfu [1] library.
- **simpl_array**: problems performing different computations on arrays of integers. Originally from the work by So and Oh [62].
In addition to those, we extracted benchmarks from other suites that contain tensor manipulations:
- **darknet**: neural network operations from the Darknet [53] deep learning framework.
- **DSPStone** and **UTDSP**: kernels targeting digital signal architectures from the DSPStone [72] and UTDSP [56] suites.
We gathered 71 benchmarks in total, of which 10 are artificial and 61 come from real-world code.
**Software.** ETS and C2TACO are implemented in Python version 3.8.10. The NMT Transformer model is implemented using Fairseq [49] 0.12.2 with Google's SentencePiece [40] as the tokenizer. The analyses described in Section 5 are implemented as plugins for the clang compiler version 14.0.0. The operating system is Ubuntu 20.04.6 LTS.
**Hardware.** We evaluate on a multi-core CPU and GPU platform. The targeted CPU is an 8-core Intel i5-1135G7 at 2.40GHz with 16 GB of RAM (LPDDR4) at 4267 MT/s. The GPU is an NVidia GeForce GTX 1080 Ti using driver version 535.54.03 and CUDA runtime version 12.2.
**Metrics.** We evaluated the performance of each approach by executing its generated code 10 times and recording the median; in our experiments, we saw little variance in execution time. We measure speedup as the ratio of the running time of the original program to that of the lifted version. Programs are compiled with gcc -O3 version 9.4. We also recorded the time to produce a lifted TACO program, with a timeout of 90 minutes for all approaches in all experiments.
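The measurement procedure can be summarized in a few lines of Python; the timings below are made-up numbers purely for illustration.

```python
from statistics import median

def speedup(original_runs, lifted_runs):
    """Take the median of the recorded runs for each binary; speedup is
    the ratio of the original program's running time to the lifted one's."""
    return median(original_runs) / median(lifted_runs)

# Illustrative timings (seconds): the lifted binary runs about 2x faster.
s = speedup([10.2, 10.0, 10.1], [5.0, 5.2, 5.1])
```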
7 Evaluation
In this section, we evaluate against four criteria: coverage (Section 7.1), error rate (Section 7.2), synthesis time (Section 7.3), and speedup (Section 7.4).
7.1 Synthesis Coverage
Figure 5 shows the lifting coverage of each of the five schemes described in section 6 across the two benchmark suites: artificial and real-world.
**Benchmark Suite: Artificial.** As described in Section 6, these are C kernels generated by the TACO compiler and therefore guaranteed to have an equivalent in the TACO language. The coverage of each scheme is shown in Table 1 and Figure 5. C2TACO is the most effective, lifting all benchmarks correctly. ETS lifts 8 out of 10; in two cases it could not find the correct program in time, as the space of possible programs grows too large. C2TACO overcomes this by using the code-analysis information to focus the search on the parts of the grammar where the solution is most likely to lie. TF-Coder is able to synthesize 4 out of 10 benchmarks, unable to match the coverage of the other synthesis approaches; like ETS, it times out on the more complex programs. NMT achieves higher accuracy, translating seven of the ten benchmarks (a success rate of 70%).
ChatGPT is able to correctly predict 5 of the 10 benchmarks, hallucinating the remainder. In four cases it produces syntactically invalid programs; the syntax errors include wrong indices, multiple assignments, and duplication. In one case a tensor was treated as having different orders in the same program. In another, ChatGPT produced a program that is syntactically valid but incorrectly refers to the same tensor twice.
**Benchmark Suite: Real-World.** Real-world benchmarks are more challenging, as shown in Figure 5. Both ETS and C2TACO achieve high coverage: 85% and 95%, respectively. ETS times out on 5 of the 61 benchmarks, while the sole failure for C2TACO is caused by program features not covered by our implementation of the grammar. TF-Coder manages to correctly synthesize 31% of the benchmarks. Along with timeouts, TF-Coder also produces programs that are semantically incorrect. We further discuss these in Section 7.2.
Real-world programs impose a harder challenge to neural machine translation due to the diversity of their implementations. While artificial programs have a syntactic structure identical to TACO-generated C programs, real-world ones are written in several different styles, which makes it difficult for sequence-to-sequence methods to recognize patterns. NMT performs particularly poorly compared to the artificial case, generating no correct programs. This reinforces the view that it may be over-specific to a particular style of programming due to its training sample. ChatGPT also performs weakly, only translating 20% of the benchmarks correctly. As in the artificial case, both approaches produce varied hallucinations, as we detail below.
7.2 Error Analysis
We identify several reasons for failure: a large search space causing timeouts, and syntactically or semantically wrong solutions. Figure 6 depicts a summary.
**Large Search Space.** Enumerative synthesis techniques explore a large search space, which grows as program length increases. This causes 60.42% of TF-Coder's failures and all failures for ETS. The neural translation approaches, ChatGPT and NMT, always find a solution in time, as they translate a program in a sequence-to-sequence fashion and do not perform an extensive search. Although C2TACO is also based on enumeration, it never times out, as program analysis restricts the search space sufficiently.
Figure 7. Lifting time on real-world benchmarks. Y-axis is on logarithmic scale.
**Syntactic.** TF-Coder, ETS, and C2TACO always produce programs that are syntactically correct. On the other hand, the neural approaches frequently generate incorrect translations or hallucinations. In addition, 90% of the wrong translations produced by ChatGPT are syntactically incorrect. These hallucinations often include explanations of the ranges of index variables, or use braces instead of parentheses, the symbol used for indexing in the TACO tensor index notation language. Example 1 shows a syntactic hallucination produced by ChatGPT.
**Example 1.** When given as input a program that computes a dot product of two arrays \(b\) and \(c\), the expected solution expressed in TACO is
\[
a = b(i) \ast c(i)
\]
However, ChatGPT produced the string below which is not a valid TACO program.
\[
\text{sum}(a[i] \ast b[i] \text{ for } i \text{ in } 0..n)
\]
Although NMT is also neural-based, it always produces well-formed programs. The difference is that NMT is trained on a domain-specific dataset containing only programs generated by the TACO compiler, while ChatGPT is trained on more diverse data.
**Semantic.** These are programs that are syntactically correct, but produce the wrong output when executed. Almost 40% of TF-Coder failures are programs that are semantically wrong. TF-Coder relies on just one IO example and often fails to generalize. The majority of false positives produced by TF-Coder include manipulations on the shape of tensors, which is not present in any of the original benchmarks. Semantic hallucinations also correspond to 9.26% of the incorrect answers produced by ChatGPT. Example 2 shows an example of a hallucination produced by ChatGPT and Example 3 depicts one generated by TF-Coder.
**Example 2.** For a program that performs general matrix multiplication, the solution can be expressed in TACO as
\[
C(i, j) = A(i, k) \ast B(k, j)
\]
ChatGPT generates a program that includes an extra summation and reference to the resulting matrix on the right-hand side. Although that is equivalent according to C semantics, the same is not true in TACO.
\[
C(i, j) = C(i, j) + \text{ALPHA} \ast A(i, k) \ast B(k, j)
\]
**Example 3.** Given a program that computes the product of an array \(arr\) with a scalar value \(v\), the correct TACO implementation is:
\[
arr(i) = arr(i) \ast v.
\]
TF-Coder synthesizes a solution that, although syntactically valid in TensorFlow, adds \(arr\) to itself, which is not semantically equivalent to the original program:
\[
\text{tf.add}(arr, arr)
\]
TACO-generated programs have a particular code structure that does not reflect real-world programming styles, which is why NMT fails to generalize. Semantic hallucinations are the cause of all of NMT’s failures.
7.3 Generation Time
**Artificial.** NMT is by far the fastest approach with a geometric mean of 0.36 seconds. NMT is faster because it does not involve an extensive search and it does not check whether the program is correct using IO examples, which represents the largest part of the synthesis time for the program synthesis approaches. ChatGPT is also fast for the same reasons and translated artificial benchmarks within 1.14 seconds on average.
Despite performing a search, TF-Coder is fast, taking an average of 1.18 seconds to find a solution. Nevertheless, TF-Coder is only able to correctly lift 40% of the artificial benchmarks (Section 7.1). ETS is the slowest method, with an average of 238 seconds to find a solution. In contrast, C2TACO takes an average of 21 seconds. That result shows the impact of the program features obtained by syntactic analysis in guiding the synthesizer to the correct answer.
**Real-World.** Figure 7 shows the synthesis times for each of the five approaches across the real-world collection. Numbers are on a logarithmic scale. As expected, both neural approaches, NMT and ChatGPT, are fast and stable across all programs. NMT always returns a program in less than 1 second and ChatGPT takes at most 4 seconds to find a solution. However, as shown in Section 7.1, this speed comes at the expense of frequently generating wrong code.
TF-Coder performs well on the simpler programs; it synthesized a solution even faster than the neural approaches in 15 cases. Nevertheless, the generation time of TF-Coder rose sharply as the programs became less trivial, and it timed out in 42 out of 61 instances. ETS is slower on average; however, it only times out on 13% of the benchmarks. We observed that ETS particularly struggles with instances of length \(N \geq 3\) and programs involving multiple multidimensional tensors, where the number of possible index expressions increases exponentially for each tensor. C2TACO is considerably faster, with an average synthesis time of 5.6 seconds and a maximum of 7 minutes. The only cases where C2TACO was slower than ETS involve very simple programs that only perform initialization of arrays with a constant value. For all other programs, C2TACO found a solution faster than ETS and remained stable across the whole suite.
7.4 Performance of Lifted Code
The main reason we wish to lift code to TACO is to exploit its portable performance. We generated C and CUDA versions of the programs generated by C2TACO and measured their performance on a multi-core CPU and GPU respectively. Figure 8 shows the speedup across the benchmark suite achieved running lifted programs. Baseline is the original implementation compiled with \texttt{gcc -O3}. Only the real-world benchmarks are considered as the artificial ones are directly derived from the TACO compiler and the synthesized version corresponds to the original.
Lifted programs are faster than their original counterparts in both devices. On a multi-core device, the benchmarks are on average 1.79x faster when lifted to TACO. That speedup varies over different benchmark sources. The highest speedup is 5.33x on DSPStone benchmarks and the lowest is 1.21x for the darknet programs. The main reason for the better performance is that the kernels generated by TACO optimize array access by linearizing index expressions and exploit the parallel nature of a multi-core CPU by inserting OpenMP pragmas on loops.
Speedup is even higher on the GPU. The lowest value was 1.23x on the makespeare set. However, it is worth emphasizing that makespeare contains only 1 program. We noticed high speedups on the digital signal processing benchmarks: DSP, DSPStone, and UTDSP, on which lifted programs are 54.46x, 40.96x and 53.81x faster than the original version. The highest value occurs on the BLAS benchmarks, which run 105.7x faster when lifted. The overall speedup achieved on GPU was 24.11x. Similarly to the multi-core kernels, TACO-generated CUDA kernels are designed to leverage high-level parallelism on GPU accelerators and are optimized aiming to divide the workload uniformly among threads.
**Speedup by Program Complexity.** We further evaluated the impact of lifting on the performance of programs as they become more complex. In our domain, we consider programs more complex as they manipulate tensors of higher orders. We define the concept of dominant order as the highest order among the tensors in a program. For example, the program shown in Figure 1 manipulates tensors of 3 different orders: vectors (order 1), a matrix (order 2), and a scalar variable (order 0). The dominant order for that program is therefore 2.
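The dominant-order metric itself is trivial to compute; a minimal sketch for the Figure 1 example, with the orders listed as given in the text:

```python
def dominant_order(orders):
    """Dominant order = the highest order among a program's tensors."""
    return max(orders)

# Figure 1: vectors (order 1), a matrix (order 2), and a scalar (order 0).
assert dominant_order([1, 2, 1, 0]) == 2
```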
Table 2 shows the overall speedup obtained on programs with different dominant orders. We observed two categories of dominant orders in the real-world benchmarks, 1 and 2. Programs that handle two-dimensional tensors benefit more
from being lifted than those operating on one-dimensional tensors. The speedup goes from 1.41x to 3.20x on the multi-core and from 20.19x to 36.97x on the GPU. These results show that the impact of lifting is even higher for programs that are more complex in the sense that they manipulate multi-dimensional tensors.
7.5 Summary
Overall, C2TACO was the most effective method in our evaluation, lifting 100% of the artificial suite (in an average of 21 seconds) and 95% of the real-world programs (in an average of 5.6 seconds). C2TACO was considerably faster than its ETS counterpart, which illustrates that the program analysis C2TACO uses to guide the search has a large impact on its generation time. We showed that we obtain performance gains by lifting programs to TACO, achieving an average speedup of 1.79x on a multi-core platform and 24.1x on a GPU.
8 Related Work
In this section we discuss how our work relates to the area of program synthesis and other techniques to automatically construct code.
8.1 Program Synthesis
Program synthesis is a well-studied area where programs are generated based on an external specification. It is the form of specification and the methodology used to generate programs that characterize the different approaches.
**Logic.** In Syntax-Guided Synthesis (SyGuS) [10] approaches, the program specification is provided in the form of first-order logic. This type of specification allows SMT solvers such as Z3 [25] to be used in a CounterExample Guided Inductive Synthesis (CEGIS) [63] loop to rapidly synthesize candidate programs. Recent work allows extension beyond first-order logic [51], but SyGuS is not well-suited to tensor computations due to the complexity of checking the correctness of a tensor computation using an SMT solver. Due to this limitation, our work uses a testing-based procedure to validate candidates. Our synthesis approach is similar in style to CEGIS(T) [3], in that we enumerate programs with symbolic constants and tensors, and then find the bindings for these constants as part of the correctness check.
**IO Examples.** IO-based synthesis is part of the programming-by-example style of synthesis, in which input/output examples are used as the specification. Early work looked at generating Excel commands from a few examples [31]. The same concept has been used for other tasks [21, 73], including generating PyTorch or TensorFlow code from tensor inputs [46, 59]. TF-Coder [59] takes as input a single user-provided example to generate equivalent TensorFlow code using type constraints and bottom-up enumerative synthesis. Alternative schemes [16, 46] use deep learning models trained on IO samples to guide the generation of code.
**Verified Lifting.** Using program synthesis to generate programs from a specification is a long-studied area [28, 61]. Using a low-level program as the specification and a high-level one as the target was tackled by Kamil et al. [34]. Here, appropriate stencil-like loops in FORTRAN are lifted to their equivalent in Halide [52]. This has been extended to a more generic LLVM framework [4] based on a common IR. While this has the potential to allow lifting to multiple targets [5–7], it requires the compiler writer to provide a compiler and decompiler from each potential source and target into the IR, which is not scalable. Their technique also relies on being able to formally verify the equivalence of the target and source programs in order to give counterexamples to the synthesis algorithm, which we have found is not possible for programs in our benchmark suite. In contrast, whilst it gives weaker guarantees of correctness, our approach is able to synthesize programs based on observational equivalence, and the scalability of our approach is not dependent on the tractability of the equivalence-checking problem.
8.2 Other Approaches
**Neural Machine Translation.** Since the advent of sequence to sequence models [64], neural machine translation has been applied to programming language translation tasks [11, 12, 26], including unsupervised settings [13, 55, 65]. Training data is often extracted from coding websites [42].
Other tasks range from code style detection [50], generating accurate variable names [41], correcting syntax errors and bugs [32, 57], code completion [36], and program synthesis [14] to API recommendation [35] and specification synthesis [43]. While powerful, such approaches are inaccurate and are not mature enough for precise lifting.
**API Migration/Matching.** Replacing matched code/IR with a fixed API call is a limited form of raising. KernelFaRer [24] works at the program level and restricts its attention to just GEMM API targets, but is more robust than IDL [30], matching significantly more user code. This robustness is extended further by Martinez et al. [44], who use behavioral equivalence to match code. Such approaches, however, are intrinsically limited as they focus on fixed APIs rather than the open-ended nature of DSLs and their IRs.
Table 2. Speedup obtained given different tensor dominant orders. We consider the highest order among the tensors in a program as dominant.
<table>
<thead>
<tr>
<th>Dominant order</th>
<th>Multi-core Speedup</th>
<th>GPU Speedup</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1.41</td>
<td>20.19</td>
</tr>
<tr>
<td>2</td>
<td>3.20</td>
<td>36.97</td>
</tr>
</tbody>
</table>
**Compiling TACO.** TACO [37] is a popular DSL for expressing tensor computations. In addition to generating high-performance CPU code [38], it has been extended to compile to GPUs [58], CGRAs [33], high-performance libraries [15], and distributed systems [70]. In addition to these target-specific optimizations, work has been done for sparse tensors [8, 71].
9 Conclusion
This paper presents C2TACO, a synthesis tool for lifting C tensor code to TACO. C2TACO uses behavioral equivalence and program analysis to generate code, and it is shown to lift more programs in a shorter time with greater accuracy when compared to an alternative NMT approach and simpler synthesis approaches. C2TACO also outperforms existing techniques, lifting 95% of the benchmarks, against 32% for TF-Coder and 24% for ChatGPT. We demonstrate that the synthesis of equivalent TACO programs is feasible for a range of C programs taken from software libraries and benchmark suites. We also show that we can obtain significant performance improvements over the original source. Using C2TACO we are able to synthesize TACO programs that are 1.79x faster when evaluated on a multi-core CPU and 24.1x faster when ported to a GPU platform. Future work will explore methods to further improve lifting applicability, by handling sparse tensor algebra, and efficiency, by using neural-guided synthesis to perform the search.
References
C2TACO: Lifting Tensor Code to TACO
Received 2023-07-14; accepted 2023-09-03
OpenGL ES 2.0 Performance Guidelines for the Tegra Series
Version 0.2
## Contents
INTRODUCTION
BASIC PERFORMANCE NOTES
MAXIMIZING THE GPU AND CPU/GPU PARALLELISM
- Avoid Redundant State Changes
- Avoid CPU-GPU Pixel Transfers
- Avoid CPU-processed Vertices
- Maximize Geometry per API Call
VERTEX SHADER PERFORMANCE
- Optimally Feeding the Vertex Shader
- Vertex Shader Guidelines and Optimizations
- Character Skinning and the Vertex Unit
FRAGMENT PERFORMANCE
- High-Level Fragment Performance Guidelines
- Data-Related Fragment Guidelines
- Pre-Shader Fragment Guidelines and Optimizations
- Fragment Shader Guidelines and Optimizations
- API-Level Fragment Recommendations
MEMORY BANDWIDTH
- High-Level Memory Bandwidth Guidelines
- Examples
- Other consumers of memory bandwidth
Introduction
NVIDIA’s Tegra mobile system-on-a-chip (SoC) includes an extremely powerful and flexible 3D GPU whose power is well matched to the OpenGL ES 2.0 API. For optimal content rendering, there are some basic guidelines and several tips that can assist developers in reaching their goals. This document details these recommendations, as well as a few warnings regarding features and choices that can limit performance in 3D-centric applications.
The 3D GPU in the Tegra series SoC contains a programmable vertex shading unit and a programmable fragment shading unit, each of which is accessible via OpenGL ES 2.0’s GLSL-ES shading language. Tegra also includes a high-performance multi-core ARM Cortex-A9 CPU and a high-bandwidth memory controller (MC) to round out the components of 3D rendering.
Optimal performance is achieved by:
1) Maximizing the efficient use of the fragment shading unit and vertex shading unit via smart shader programming
2) Minimizing the use of the CPU by avoiding redundant and ill-optimized rendering methods.
3) Optimizing the use of memory bandwidth across the fragment unit, vertex unit and display systems.
This document will cover aspects of all of these elements. Note that all quoted numbers are relative to common clock settings on the Tegra 250. Numbers on other Tegra variants may differ slightly.
Basic Performance Notes
In real-world applications, the most common performance bottlenecks are:
1) Fragment fill rate for applications using long shaders and/or lots of overdraw
2) Memory bandwidth on devices with large screens or when using large/deep textures
3) Lack of CPU/GPU parallelism for applications that use redundant or GPU-unfriendly OpenGL ES code
Note that the vertex shader unit is not in this list. The vertex shader unit in Tegra is extremely powerful, and is rarely the bottleneck in current mobile 3D applications (however, note the following sections on how to best keep the vertex shader unit busy).
Maximizing the GPU and CPU/GPU Parallelism
The most common initial performance issues in 3D apps tend to involve causing the driver to do needless work or doing work in the app (on the CPU) that can be better done on the GPU.
Avoid Redundant State Changes
Avoid redundant state changes to the driver (e.g. glEnable/glDisable). There are several common cases:
*(Figure: a simple scene graph with a root node and two child objects.)*
Do not “Push and Pop”
Do not “push” and “pop” render state e.g. during a scene graph traversal; every render state change should be directly related to a draw call. Often, push/pop-style behavior can lead to cases such as the following (see the simple scene graph above):
- Set state to an initial value A at the start of the frame based on the root.
- Set state to B and traverse down into object (driver must flag a change)
- Draw object with state B
- Step up the tree, out of the object and reset the state to A (driver must flag a change)
- Set state to B again and traverse down into another object (driver must flag a change)
- Draw object with state B
In this case, both objects were drawn with the driver having to at least process the changed state to determine that it hadn’t actually changed – it was B in both draw calls. Associate state with drawable objects and set accordingly.
Avoid Setting Entire “Materials” on Each Draw Call
Do not send down every render state to GLES on every draw call and assume the driver will test for unchanged values. Use high-level app knowledge to send down only those that have changed, since this can often be done with far fewer comparisons at a higher level.
Avoid Changing Expensive States
Know which states are particularly expensive, and do not change them very frequently. Particularly expensive states include:
• **glUseProgram**: changing shader programs can be very expensive, as the shader program is responsible (according to the GLES spec) for storing and restoring the state of all of its uniforms (shader constants). The more uniforms in the shader, the more expensive swapping will be. Also, shaders are programs, and swapping them can cause significant performance issues
• **Some texture formats**: When using runtime-compiled shaders, switching between non-floating-point and floating-point texture formats used with a given shader can cause a driver-level shader change and perhaps a recompile.
• **Alpha/Pixel blending mode**: When using runtime-compiled shaders, switching pixel blending modes used with a given shader can cause a driver-level shader change and perhaps a recompile. This is one case where it may be worthwhile to have independent versions of a shader, one for each blended (and the non-blended) mode, and use a fixed blending mode with each copy.
• **Buffer masking**: When using runtime-compiled shaders, switching buffer masking modes used with a given shader can cause a driver-level shader change and perhaps a recompile
Consider State-Sorted Rendering
Where possible, accumulate your drawable objects into sets, grouped by expensive states like shader program, and render all objects with those same states together, changing state only at
the start of each different set, not each object. This form of state gathering can also be useful for analysis.
Avoid CPU-GPU Pixel Transfers
Avoid the following functions on a per-frame basis, as they use memory bandwidth and can stall the rendering pipeline, minimizing GPU/CPU parallelism:
- `glReadPixels`
- `gl*TexImage*`
- `glBuffer*Data`
Avoid CPU-processed Vertices
Processing vertices on the CPU is sub-optimal for several reasons:
- It uses the CPU for work that is better-suited to the GPU’s vertex unit
- It leaves the powerful GPU vertex unit underworked
- It requires transferring the transformed vertices to OpenGL ES each frame.
Thus, it is best to rework older CPU-based vertex transforms and deformations (such as those required by OpenGL ES 1.x’s restrictive pipeline) into vertex shaders. This opens up a range of optimizations, since vertex shaders on Tegra can utilize a wide range of data types directly (float, half-float, byte, short, etc.), allowing for smaller vertex data than would have to be kept around for CPU-based vertex processing.
Maximize Geometry per API Call
For most applications, peak performance will be attained by feeding as much rendering data as possible to OpenGL ES 2.0 with the lowest number of API calls. Geometry should be sent in large batches, to ensure maximal CPU-GPU parallelism and to allow the GPU pipeline to run at maximum efficiency.
Vertex Shader Performance
Tegra’s vertex shader unit is extremely powerful and flexible. It fully supports conditionals and looping. It is capable of transforming vertices at a rate of over 42M vertices per second. The most important methods of increasing vertex shader performance and utilization are to use it for as much processing as reasonably possible and to feed it well.
Optimally Feeding the Vertex Shader
- Use indexed geometry (glDrawElements) to maximize the re-use of coincident vertices. This optimizes the use of the vertex unit and minimizes memory bandwidth use.
- Use VBOs and IBOs (index buffers) for all geometry to maximize parallelism.
- If you need to use “dynamic” (CPU-processed) vertices, put them in VBOs as well.
- Mark as many VBOs as possible with GL_STATIC_DRAW and do not rewrite them.
- Prefer fewer independent vertex attributes. Instead, pack multiple vertex attributes into fewer, wider vertex attributes (e.g. a pair of 2D UV sets packed into one 4D attribute).
- Use smaller-sized vertex attribute types where possible. Tegra supports byte and short attributes as well as 16- and 32-bit half-float and float attributes. Packing normals into 32 bits (X8Y8Z8<unused>8) can reduce vertex normal member bandwidth by 3x (3x32 becomes 4x8) with little or no reduction in image quality. This minimizes memory bandwidth use.
VBOs
Maximum performance on Tegra is achieved only when all vertex attributes and primitive indices are read from Vertex Buffer Objects (VBOs). Even dynamic geometry should be stored in VBOs. In most cases, only a few attributes of a dynamic primitive are actually dynamic (often positions and normals). These attributes should be uploaded to an interleaved vertex buffer each time the data is changed. However, the static attributes of that same object (e.g. texture coordinates, per-vertex colors) should be packed into another (static) VBO. This allows all of the vertex attributes of the dynamic object to be read from VBOs while avoiding having to reload the static vertex attributes every frame, as would be the case if a single VBO was used for all of the object’s vertex attributes (static and dynamic).
When using a VBO for dynamic vertices, it is important to avoid an immediate cycle of glBuffer*Data/glDraw*/glBuffer*Data/... with the same buffer. Locking a VBO right after a rendering call using that buffer limits CPU-GPU parallelism and can block the app. Use at least a pool of round-robin VBOs for dynamic objects (reusing in least-recently-drawn fashion), and if possible, use an independent VBO per dynamic object. This avoids stalling the app to wait for a pending draw call using the same VBO.
Vertex Shader Guidelines and Optimizations
Keep the Vertex Shader Busy
Analyze the values that are computed in the vertex+fragment shader effect as a whole. Move per-fragment operations that are constant, linear, or “near linear” across a triangle from the fragment unit to the vertex unit. This can minimize per-fragment work, cut down on varying data elements used to communicate between the vertex and fragment unit and utilize the powerful looping and conditional support in the vertex unit.
Character Skinning and the Vertex Unit
Moving character skinning from the CPU to the GPU is a perfect way to offload the CPU and lower memory bandwidth. OpenGL ES 2.0 makes dynamic character skinning possible on the GPU even if the skinning method does not fit the “basic bone-palette” limitations. Even more complex skinning can be done on the GPU (e.g. bone skinning combined with morph deformations). By moving all skinning to the GPU, we can also avoid using dynamic vertex buffers, since all of the source data (except matrices) can be static. However, there are a few recommendations for character skinning on the GPU:
- Analyze the use of bone matrices per object and avoid passing down unused bone matrices as uniforms for a given object.
- Analyze bone weights per vertex offline and cull bones with inconsequential weights.
- Since bone matrices are normally rigid transforms, consider using 3x4 matrices (a set of 3 4-vector uniforms) to represent each as a 3x3 rotation+scale and a 3D translation rather than 4x4 matrices for bones, especially if the bone palette is large. Then the final transform from world or post-deformed model space to clip space can be a single 4x4 matrix. This can cut the number of 4-vector uniforms per vertex shader by 25%.
- If multiple sub-sections of a character are to be drawn with the same shader, but each with different rendering settings, consider setting the shader and its bone transform uniforms once, then interleave texture and render state changes with sub-mesh draw calls without ever changing the shader or the bone uniforms. This can greatly lower the overhead of the sub-object rendering. In this case, since the entire character’s palette of bone matrices can be sent down once, it is fine that each subsection of the mesh does not use all of the bones.
- Carefully analyze the performance of multi-pass rendering algorithms with complex GPU skinning, since GPU skinning is computed for each rendering pass, not once per frame.
Fragment Performance
High-Level Fragment Performance Guidelines
The 3D engine clock on Tegra 250 platforms is typically set to 300 MHz. Fragment shader performance is measured in fragments rendered per second, also called the fill rate; however, it is often more useful to express performance as a number of 3D engine clock cycles per rendered fragment. The fragment shader unit can produce shaded fragments at a maximum of one fragment per 3D clock. This translates (at the typical clock) to a maximum fill rate of 300M fr/s for a single-cycle shader.
It is worth mentioning that early depth and stencil tests are performed before the fragment shader at a rate of 4 fragments per clock. This means that fragments can be culled by depth and stencil tests 4x faster than they can be rendered: as fast as 1.2G fr/s at a 300 MHz clock. Using depth and stencil tests is strongly recommended in applications with a high redraw rate (the ratio of rendered fragments to modified pixels). However, note that some fragment features, such as conditional discard, will disable early depth and stencil tests and lower the rate of fragment rejection. See the details in the later sections on depth and stencil kill and on discard.
Note that glClear does not use the 3D engine (it uses the 2D hardware instead). In practice, 550M cleared pixels per second are possible, so clearing with 3D polygons is not recommended unless the scene geometry itself covers the entire screen (e.g. an interior scene).
Data-Related Fragment Guidelines
Texturing
Texture Formats
Where possible, use texture formats with the lowest number of bits per pixel that will fulfill the needs of the source artwork and its use in the shader. Low bit-per-texel formats include:
- **RGB**: DXT1, ETC
- **RGBA**: DXT3, DXT5
- **Single-channel** (luminance or alpha): GL_LUMINANCE, GL_ALPHA
- **Two-channel** (luminance and alpha, 2D vector): GL_LUMINANCE_ALPHA
The availability of fragment shaders makes it easier to use the single- and dual-channel textures in creative ways. These can lower an application’s memory footprint and increase performance.
The basic guidelines are:
| Formats | Bits per Texel |
| --- | --- |
| GL_COMPRESSED_RGB_S3TC_DXT1_EXT | 4 |
| GL_COMPRESSED_RGBA_S3TC_DXT1_EXT | 4 |
| GL_ETC1_RGB8_OES | 4 |
| GL_COMPRESSED_RGBA_S3TC_DXT3_EXT | 8 |
| GL_COMPRESSED_RGBA_S3TC_DXT5_EXT | 8 |
| GL_LUMINANCE, GL_UNSIGNED_BYTE | 8 |
| GL_ALPHA, GL_UNSIGNED_BYTE | 8 |
| GL_UNSIGNED_SHORT_4_4_4_4 | 16 |
| GL_UNSIGNED_SHORT_5_5_5_1 | 16 |
| GL_UNSIGNED_SHORT_5_6_5 | 16 |
| GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE | 16 |
| GL_LUMINANCE, GL_HALF_FLOAT_ARB | 16 |
| GL_RGB, GL_UNSIGNED_BYTE | 32 (see note) |
| GL_RGBA, GL_UNSIGNED_BYTE | 32 |
| GL_LUMINANCE_ALPHA, GL_HALF_FLOAT_ARB | 32 |
| GL_RGB, GL_HALF_FLOAT_ARB | 48 |
| GL_RGBA, GL_HALF_FLOAT_ARB | 64 |
The difference between an RGBA texture using DXT and using half-float is 8x!
**Note:** Tegra does not directly support 24 bit per pixel RGB textures. These are expanded by the driver at specification time to 32 bit per pixel RGBX textures. No device memory is saved with these formats, and the reformatting process at specification time requires driver work on the CPU.
**Texture Filtering**
Mipmapping can generally improve performance of texture-heavy applications if any of the mipmap-enabled, non-trilinear filtering modes are used for “minification” filtering. These include **GL_NEAREST_MIPMAP_NEAREST**, **GL_LINEAR_MIPMAP_NEAREST**, and **GL_NEAREST_MIPMAP_LINEAR**.
Trilinear filtering mode (**GL_LINEAR_MIPMAP_LINEAR**) is more expensive than the other mipmapped modes and should be used where proven to be needed visually.
In addition, using anisotropic filtering via **GL_TEXTURE_MAX_ANISOTROPY_EXT** can further decrease performance and should be used sparingly; only where it is needed.
Fragment Data Types
Tegra supports two levels of fragment variable precision: fp20 (an s.6.13 floating-point format) and fx10 (two’s complement s.1.8 format). Tegra can efficiently store twice as many temporaries, varyings and uniforms in fx10 format than in fp20.
- **highp** and **mediump** precision variables are both interpreted by the compiler as 20-bit floating-point values (fp20)
- **lowp** precision variables are interpreted as fixed-point 10-bit values (fx10). As fx10 can only store values of range (-2, 2), it is typically used only for color computations and normalized values (e.g. perfect for blending). Floating point precision is usually required for storing coordinates (e.g. interpolated texture coordinates).
**Note:** Currently, the compiler falls into a non-optimal case when directly-used texture UVs are declared as lowp. Avoid this by keeping UVs highp.
Minimizing the number of actively-used hardware vector registers at any point in a shader is important on Tegra for maximum performance. Registers are consumed by actively-used varying and temporary variables in the shader. A register or sub-section of a register can be used by several variables if those variables have non-overlapping lifespans. The shader compiler actively optimizes these. Best performance will be found by limiting the number of actively-used variables (temporaries and uniforms) at any given time. Note that a register can hold either one fp20 variable or two fx10 variables, so use of lowp will help maximize register usage.
Pre-Shader Fragment Guidelines and Optimizations
EGL Configurations
Certain EGL buffer configurations can cause performance degradation, including some surprising cases. The following sections detail this.
**32 BPP versus 16 BPP**
Note that owing to a quirk in the EGL specification, requesting a 16bpp RGB rendering buffer via eglChooseConfig will return 24- or 32bpp rendering configs (if available) before any 16bpp configs. Thus, it is safest to have EGL return a long (16 or 32 element) array of configs and sort/search them manually for the best-fitting config.
**Note:** Selecting a 32bpp config with a 16bpp screen format (or vice-versa) can result in decreased eglSwapBuffers performance due to the format conversion required.
Coverage Buffers
If an application includes a coverage buffer request (or for any reason uses a config that includes a coverage buffer), then buffer swapping costs can be slightly increased. Note that Coverage Sampled AA is on by default if there is a coverage buffer in the EGL config, so having a coverage buffer can lower peak fragment performance without the application setting any specific rendering states. Keep this in mind when choosing a rendering config. The EGL configuration values in question are:
EGL_COVERAGE_BUFFERS_NV
EGL_COVERAGE_SAMPLES_NV
Depth and Stencil Kill
As mentioned previously, Tegra can reject fragments via depth and/or stencil testing at 4x the peak fragment shading rate. Thus, it is best to use depth or stencil rejection when possible to increase practical fragment throughput.
Depth-Kill
Tegra can reject fragments via depth-testing at a very high rate. As a result, applications that can render opaque parts of a scene even roughly front-to-back with depth testing enabled can see a performance improvement. This is especially true if possibly-occluded objects with expensive fragment shaders can be drawn last.
If the application uses particularly complex fragment shaders with a large amount of overdraw, then even if front-to-back sorting is not feasible, the application can see higher performance using an initial depth-only rendering pass with the color buffer masking set to GL_FALSE. For optimal performance, applications should consider using a custom, almost null fragment shader for this pass. However, applications using runtime source-code shaders will see a performance boost by setting the color masks to GL_FALSE during the depth pre-pass, as the online shader compiler should substitute a trivial shader.
Stencil-Kill
Stencil-killed fragments are generally the fastest rejection cases possible, as they are 8-bit, rather than 16-bit surfaces. Stencil killing for depth complexity minimization can be more complex in terms of application setup code, and some datasets simply cannot sort geometry in this way. However, if static geometry is available pre-sorted, stencil-kill can provide maximum performance. Applications that are fill-limited and have high per-pixel fragment depth should
consider stencil-killed front-to-back rendering with depth-testing disabled. In some cases, 2D UIs done in OpenGL ES are good examples of this.
**Fragment Shader Guidelines and Optimizations**
**Understanding Lower Bounds**
There are a number of ways to approximate a lower bound on the number of clocks required to render a fragment. These can assist in optimizing shaders. We will think of the fragment shader unit in terms of a set of pipelined “sub-units” that do different fragment-related functions. The most important sub-units are the raster sub-unit, the texture sub-unit, and the ALU sub-unit. The max number of cycles between these units is a (very) rough lower bound, although obviously dependencies between the units (ALU needing a texture lookup, texture coords needing ALU computations) can raise these:
**Raster Sub-Unit**
The raster sub-unit can generate up to eight varyings in one cycle, grouped into four “slots”. A single slot cannot hold parts of more than one varying, so it is best to group scalar lowp varyings into vector varyings!
Slot counts:
- 1 slot:
- lowp float (wasteful; merge with another scalar lowp float into a vec2)
- lowp vec2
- highp float
- 2 slots:
- lowp vec3 (wasteful; merge with another scalar lowp float into a vec4)
- lowp vec4
- highp vec2
- 3 slots:
- highp vec3
- 4 slots (a full cycle):
- highp vec4
- 2 lowp vec4’s
**Texture Sub-Unit**
The texture sub-unit can retrieve:
- A non-floating-point RGBA texture in one cycle
- A floating-point A or LA texture in one cycle
- A floating-point RGBA texture in *two* cycles
The fact that the texture sub-unit generates at most one texture sample per cycle also means that there is no need to interpolate more than one directly-used texture coordinate in a single varying. This can make a difference in cycle counts. For example, the shader:
```glsl
varying vec2 uv;
varying vec2 uv1;
varying lowp vec4 color;
uniform sampler2D tex0;
uniform sampler2D tex1;
void main()
{
gl_FragColor = texture2D(tex0, uv) * (color + texture2D(tex1, uv1));
}
```
May take fewer cycles than
```glsl
varying vec4 uv;
varying lowp vec4 color;
uniform sampler2D tex0;
uniform sampler2D tex1;
void main()
{
gl_FragColor = texture2D(tex0, uv.zw) * (color + texture2D(tex1, uv.xy));
}
```
Performance testing will confirm the difference. Consider this when packing varying values.
**ALU Sub-Unit**
The ALU sub-unit can execute up to four independent scalar MAD’s (Multiply-Adds) per clock (actually, these are technically Multiply-Multiply-Adds with limitations), i.e.
General case:
\[ x = a \times b + c \]
But with some limitations on \( d \), it can do:
\[ x = a \times b + c \times d \]
where \( d \) is equal to \( b \) or \( c \), modulo the free register modifiers listed in a later section (e.g. \( 1-d \)).
Thus, there is no way to do more than 4 adds in one cycle, so the number of adds required to render a fragment divided by 4 is the lower bound on the cycle count.
However, the ALU units can do a few other related operations. Instead of 4 MADs, some other operations are possible in a single cycle, such as:
- 4-vector dot product
- 3-vector dot product (plus scalar addition)
Generally, it is only possible to do 4 multiplications per cycle. With certain constraints it is possible to do limited cases involving 6 or 8 multiplications per clock, owing to the way that the 4 independent MADs work. But in general, the number of multiplication results required to render a fragment divided by 4 is a lower bound on the cycle count.
Immediate values (numeric constants compiled into the shader) other than 0.0 and 1.0 use ALU unit cycles. While the ALU can issue MADs in the same cycle as immediate values, it can only generate three fp20 immediates (or three pairs of fx10, or any combination thereof) per cycle, which limits the number of MADs in that cycle to 3.
**Multi-Unit Operations**
Scientific functions (e.g. sin, sqrt) and 1/x can take more than one cycle and more than one unit and can complicate lower bound computations.
**An Example**
As an example, consider the shader seen earlier:
```glsl
varying vec2 uv;
varying vec2 uv1;
varying lowp vec4 color;
uniform sampler2D tex0;
uniform sampler2D tex1;
void main() {
gl_FragColor = texture2D(tex0, uv) * (color + texture2D(tex1, uv1));
}
```
Analyzing it, we see:
• There are two texture loads, so the min cycle count for the texture sub-unit is 2
• There is a 4-vector color multiply, and a dependent 4-vector color add. Thus, two ALU cycles are the minimum
• There are three varying vectors needed. However, color and uv1 each take two slots, thus it may be possible to compute color and uv1 in the same cycle, and then uv in another cycle. So a minimum of 2 cycles are required in the raster sub-unit.
This indicates that across the board, this shader could require two cycles. Using a current shader compiler as of the writing of this document, the shader was compiling to two cycles.
Fragment Shader Tips and Recommendations
Utilize Free Input Operand Modifiers
Tegra can modify the values of fragment math operands “for free” in some key cases. This includes such modifiers as: (-x), (x-1), (x/2), (x*2). The compiler will apply these automatically as it can, so simply be aware of their existence. The structure is such that a given source operand can do the following (or skip them) in the given order:
1. Scale by 2x
2. Subtract 1.0
3. ABS
4. Negate
Thus, (2x-1) can be a free modifier (if the compiler does not need the modifiers on the operand for another transformation…). For example, 4-component blending as follows:
\[
\begin{align*}
\text{newDest.rgb} &= \text{oldDest.rgb} \times \text{src.a} + \text{src.rgb} \times (1 - \text{src.a}) \\
\text{newDest.a} &= \text{oldDest.a} \times \text{src.a} + \text{src.a} \times (1 - \text{src.a})
\end{align*}
\]
is possible in 1 cycle, as it can be written as:
\[
\begin{align*}
\text{newDest.rgb} &= \text{oldDest.rgb} \times \text{src.a} + \text{src.rgb} \times (-\text{(src.a - 1)}) \\
\text{newDest.a} &= \text{oldDest.a} \times \text{src.a} + \text{src.a} \times (-\text{(src.a - 1)})
\end{align*}
\]
(since the MAD instructions use src.a multiple times and the (1 – src.a) can be computed from src.a via input operand modifiers)
Utilize Free Result Modifiers
Tegra can modify the values of fragment math operation results “for free” in some key cases. This includes modifiers (x/2), (2x), (4x), (clamp(0, 1)). The compiler will apply these automatically as it can, so simply be aware of their existence. One scaling followed by a clamp can be applied to the result of an operation. So the GLSL function
`y = clamp(x, 0.0, 1.0);`

can be free if it can be applied to the result of the previous operation. As a result, preferring `clamp(x, 0.0, 1.0)` to `min(x, 1.0)` where applicable (i.e. when the additional clamp to zero works for the algorithm) can increase performance.
Avoid Conditional Code
Avoid conditional code in the fragment shader. Especially avoid using uniforms or other input variables to emulate discrete sets of modes. Any discrete set of modes can and should be split into a set of specialized shaders, one per each mode.
Both branches of the “if” conditional are executed to render a fragment. This can make conditional code quite long, especially when conditionals are nested.
If you need to use conditionals it is better (where possible) to express them as ternary ?: operators and GLSL functions that produce binary vectors (e.g. lessThan).
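A small fragment-shader sketch of this advice (the `useDetail` uniform and the varyings are hypothetical names, not from any particular engine):

```glsl
varying lowp vec4 base;
varying lowp vec4 detail;
uniform lowp float useDetail;   // hypothetical mode flag, 0.0 or 1.0

void main()
{
    // Instead of:
    //   if (useDetail > 0.5) gl_FragColor = base * detail;
    //   else                 gl_FragColor = base;
    // express the selection as straight-line code:
    gl_FragColor = (useDetail > 0.5) ? base * detail : base;
    // or with builtins that produce blend factors:
    //   gl_FragColor = mix(base, base * detail, useDetail);
}
```

Better still, per the guideline above, split the two modes into two specialized shaders and select between them on the CPU, so neither pays for the other's work.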
Avoid discard
The aforementioned 4x early Z/S optimization cannot be used if there are “discard” statements in the fragment shader. Note that this is true whether or not particular fragments will be discarded. Existence of paths that can reach discard in a shader will disable the optimization. Avoid using discard in shaders when possible. If you must use discard, consider using buffer-masking functions in GL ES 2.0 to disable buffers such as stencil or depth that need not be written.
Use uniforms instead of numerals
With a few exceptions numerical constants cannot be used directly as operands by Tegra. General immediates/numerals waste some clock cycles as well as precious register space. Uniforms, on the other hand, can be used directly as operands. Instead of using the following:
\[ x = y \times 0.23 + z \times 0.75 - 0.11 \]
it is often better to rewrite as follows:
```glsl
uniform lowp vec3 c;
// (...)
x = y * c.s + z * c.t - c.p;
```
where \( c \) is a uniform initialized to \( \text{vec3}(0.23, 0.75, 0.11) \). This code is also more flexible and reusable than the immediate-based example if there is the possibility of needing to change these values.
The known exceptions are constants 0 and 1, which are free, as well as multiplication by specific modifiers, as discussed previously.
Use some form of Color Masking Where Appropriate
When you render to a surface or use an effect that requires fewer than 4 color components, consider using some form of color masking. The exact method may differ from case to case, as listed in this section. Either \texttt{glColorMask}, an equivalent “pragma” for offline shaders, or a constant assignment of 0 in the alpha channel of the shader may be appropriate. The shader compiler should be able to save some instructions by not initializing unused output components. However, note that using the \texttt{glColorMask} or pragma methods can actually increase memory bandwidth requirements of a non-pixel-blended shader. Thus, it is best not to mask colors with `pragmas` or `glColorMask` when `glBlend` is disabled; in these cases, it is best to simply use a constant assignment to the unused channels in the shader.
**Note:** Avoid masking colors with `pragmas` or `glColorMask` when `glBlend` is disabled.
**Do not update destination alpha when not needed**
The blending state is often set as follows:
```
GL_ADD: GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA
```
In this mode, the destination alpha component is updated to the target buffer but it is never used or visualized, which is wasteful of shader cycles. There are several options to minimize this expense in cases where destination alpha is unused:
**Note:** These recommendations are **ALL** based upon the notion that alpha/pixel blending is being used. If the shader in question does not compute pixel blending, then the best method of optimization is to write 0 to the alpha channel in the shader (first option below).
- Writing zero to destination alpha. To do this, call `glBlendFuncSeparate` with `GL_SRC_ALPHA`, `GL_ONE_MINUS_SRC_ALPHA`, `GL_ZERO`, `GL_ZERO`. Generally, this still requires at least one ALU sub-instruction to zero the register representing destination alpha, although in some cases it can be rolled into an existing instruction.
- Leave the destination alpha unchanged. To do this, call `glBlendFuncSeparate` with `GL_SRC_ALPHA`, `GL_ONE_MINUS_SRC_ALPHA`, `GL_ZERO`, `GL_ONE`. This will allocate at least one extra fx10 register between the time the pixel color is loaded and until its color is written back to the surface, but will not require additional ALU instructions.
- Often the most efficient method (and the simplest) is to use `glColorMask`.
Each of these methods can turn out to be the most efficient, depending on the exact shader being used. In cases with a lot of blending, apps should benchmark all of these options.
**Use Explicit access to last fragment color**
The fragment shader can fetch pixel color from the render target into a variable; this is as efficient as fixed-function alpha blending. The feature is accessed via an NV extension that lets the shader access the old pixel color explicitly. The equivalent of standard alpha blending can be implemented without blending enabled in GL state with the following shader:
```glsl
#extension GL_NV_shader_framebuffer_fetch : enable
uniform lowp vec4 col;
void main()
{
    // Blend manually with the previous framebuffer color, which the
    // extension exposes as gl_LastFragColor:
    gl_FragColor = col * col.a + gl_LastFragColor * (1.0 - col.a);
}
```
December 2012 - 18 -
The extension gives more flexibility for pixel blending than is possible via ES 2.0’s fixed-function alpha blending. This flexibility can sometimes save a few clock cycles per fragment, since blending computation can be rolled into existing shader cycles in some cases.
API-Level Fragment Recommendations
Source-Code Shaders versus Binary Shaders
Runtime source code shaders on Tegra are extremely convenient, as they do not require the application to pre-compile their shaders, nor do they require pragmas to set the blend mode, write-masking, etc. However, there are prices to be paid for this convenience. Runtime shader compilation increases the memory footprint of the application (the shader compiler code and data structures), and significant CPU work is incurred whenever a shader must be compiled.
Shaders must be compiled or recompiled in at least the following cases:
- First rendering use of the shader program
- Each time a new, unique combination of “key” states is used with the shader program.
Key states in this context include all states for which precompiled shaders require pragmas, including:
- Color blending mode
- Write masking
- Texture image formats (unsigned versus signed versus half-float, not component depths or count)
- Framebuffer format (unsigned versus signed versus half-float, not component depths or count)
In addition, using a large number of unique sets of key states with any given shader can lead to additional recompilations in some cases. The number of unique sets of key states used with a given shader should be limited whenever possible.
Memory Bandwidth
High-Level Memory Bandwidth Guidelines
On Tegra, with complex, blended, texture-heavy rendering, maximum memory bandwidth can become a performance bottleneck if care is not taken. Understanding some rough guidelines can help the developer know when they may be bumping up against such limits.
The memory interface is designed to transfer up to 8 bytes per memory controller (MC) clock cycle. This would imply about 2.5GB/s at a typical setting of 333 MHz (1G = 1024^3 here). However, the efficiency in real-life cases is lower, especially when multiple hardware engines access memory at once. Based on experimental data, it is safe to assume between 60-90% efficiency for fragment rendering (1.5GB/s at 60% efficiency). We will use this most pessimistic assumption in these examples, in order to help form some lower bounds. Actual available bandwidth may be considerably higher than 1.5GB/s.
Memory latency (e.g. texture fetching) is well hidden on Tegra so we only need to calculate the number of memory transfers per fragment to figure the fill rate that maximizes memory bandwidth.
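As a quick sanity check on the numbers in the examples below, the memory-bound fill-rate ceiling implied by a given per-fragment byte cost can be computed directly. This is a hedged sketch (names are ours), using the document's pessimistic 1.5 GB/s figure with 1 GB = 1024^3 bytes:

```python
# Sketch: fill-rate ceiling implied by memory bandwidth alone.
# Assumes the pessimistic 1.5 GB/s figure, with 1 GB = 1024**3 bytes.
BANDWIDTH_BPS = 1.5 * 1024**3

def max_fill_rate(bytes_per_fragment: float) -> float:
    """Fragments per second before memory bandwidth becomes the bottleneck."""
    return BANDWIDTH_BPS / bytes_per_fragment

# 4 B color write only (Example 1): ~402 M fr/s
example1 = max_fill_rate(4)
# 4 B read + 4 B write for blending (Example 2): ~201 M fr/s
example2 = max_fill_rate(8)
```

The same helper reproduces the figures derived by hand in the examples that follow.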
Examples
Here are some examples of shaders and their memory bandwidth requirements.
Example 1: Simplest Shader
We use the following shader:
```c
uniform lowp vec4 col;
void main()
{
gl_FragColor = col;
}
```
Assuming that the surface format is RGBA8888 with no depth or stencil writes, the shader writes 4 bytes per fragment so we would hit the bandwidth limit at:
$$\frac{1.5 \text{ GB/s}}{4 \text{ B/fr}} = 402 \text{ M fr/s}$$
Memory bandwidth will not be a bottleneck for this shader (since a 1-cycle shader would be GPU-limited to 300M fr/s).
Example 2: Simplest Blending
We use the same shader as in the first example, but we also enable blending, set the blend equation to GL_ADD, and set the blend function to GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA.
Since the shader uses blending we need to read and write 4 bytes per fragment. Therefore, using the most conservative memory bandwidth assumption, we cannot render fragments faster than:
\[
(1.5 \text{ GB/s}) / (8 \text{ B/fr}) = 201 \text{ M fr/s}
\]
It turns out that the cycle count of the program itself is only one. This shader may be memory bound, since 201M fr/s is considerably lower than the GPU limit of 300M fr/s for a single-cycle shader.
**Example 3: Texturing**
We use the following shader:
```glsl
uniform sampler2D tex;
varying vec2 tcoord;
void main()
{
gl_FragColor = texture2D(tex, tcoord);
}
```
We assume for this example that we render 100x100 quads using 150x150 RGBA textures (one texture per quad) compressed with a ratio of 1:2 (2 bytes/texel, e.g. RGB565). We also use linear texture filtering without mipmapping.
We have to fetch all texture data at least once and, thanks to texture caching, only once. Therefore the average amount of texture data fetched per fragment can be calculated like this:
\[
(150^2 \text{ tx}) * (2 \text{ B/tx}) / (100^2 \text{ fr}) = 4.5 \text{ B/fr}
\]
We also have to write 4 bytes of color per fragment. Therefore, using the most conservative memory bandwidth assumption, the fill rate could not be higher than:
\[
(1.5 \text{ GB/s}) / (4 \text{ B/fr} + 4.5 \text{ B/fr}) = 189 \text{ M fr/s}
\]
Since the shader is a one-cycle shader, it could run at 300M fr/s if memory bandwidth were ignored. But with the memory bandwidth limitation, the effective throughput is only about 189M fr/s. Thus, memory bandwidth in this case brings a one-cycle shader's effective throughput much closer to that of a two-cycle shader (which would be GPU-limited at 150M fr/s).
The performance can be improved a lot in similar cases by changing texture size, format or filtering settings. For example, switching to DXT1 (4 bits/texel) leaves:
Texture reads:
\[
(150^2 \text{ tx}) * (0.5 \text{ B/tx}) / (100^2 \text{ fr}) = 1.125 \text{ B/fr}
\]
\[
(1.5 \text{ GB/s}) / (4 \text{ B/fr} + 1.125 \text{ B/fr}) = 314 \text{ M fr/s}
\]
This changes the balance considerably: the fragment rate crosses over to not being memory bound. In addition, keep in mind that this assumed 150x150 textures with 100x100 quads. If we increase the quad size, the amortized B/fr from texture memory reads falls further, assuming caching.
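The arithmetic of this texturing example can be reproduced in a few lines; this is a hedged sketch of the numbers above (assuming 1 GB = 1024^3 bytes), not a profiling tool:

```python
# Sketch: amortized texture traffic and the resulting fill-rate ceiling.
BANDWIDTH_BPS = 1.5 * 1024**3

def texture_bytes_per_fragment(tex_side, bytes_per_texel, quad_side):
    # Each texel is fetched exactly once (thanks to texture caching),
    # amortized over the fragments of the quad.
    return tex_side**2 * bytes_per_texel / quad_side**2

rgb565 = texture_bytes_per_fragment(150, 2.0, 100)   # 4.5 B/fr
dxt1   = texture_bytes_per_fragment(150, 0.5, 100)   # 1.125 B/fr

# 4 B framebuffer write plus amortized texture reads per fragment:
rate_rgb565 = BANDWIDTH_BPS / (4 + rgb565)           # ~189 M fr/s
rate_dxt1   = BANDWIDTH_BPS / (4 + dxt1)             # ~314 M fr/s
```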
**Other consumers of memory bandwidth**
Tegra has a unified memory; other modules compete with the 3D engine in order to access the memory.
The display engine can consume significant bandwidth when it is continuously reading the back-buffer in order to refresh the display. For example in 720P, with 60Hz native display refresh rate and RGBA8888 surface format the display engine has to transfer:
\[
1280 \times 720 \times 60 \times 4B = 211 \text{ MB/s}
\]
In many cases, if the application window is not full-screen then we cannot just flip the front and back buffers to swap them. We have to blit the GL backbuffer surface to a surface owned by the window manager or OS. This operation requires reading and writing 2 or 4 bytes per pixel!
Swapping buffers to render a 30FPS animation in a 720P window at 32 bits per pixel consumes additional bandwidth equivalent to the previous example:
\[
1280 \times 720 \times 30 \times (4B + 4B) = 211 \text{ MB/s}
\]
Put together, display plus blitting in a compositing OS has an overhead of ~422 MB/s.
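These two contributions can be tallied directly; a hedged sketch, where “MB” means 1024^2 bytes to match the figures above:

```python
# Sketch: bandwidth consumed by display scanout and compositing blits.
def surface_traffic_bps(width, height, hz, bytes_per_pixel):
    return width * height * hz * bytes_per_pixel

scanout = surface_traffic_bps(1280, 720, 60, 4)      # refresh reads
blit    = surface_traffic_bps(1280, 720, 30, 4 + 4)  # read + write per pixel

MB = 1024**2
# Each works out to ~211 MB/s, ~422 MB/s combined.
```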
**Combined Example 1: High-quality Composed/Blended UI**
If we assume the following:
- A tablet screen with 1024x600 resolution at 32bpp
- A scan rate of 60fps
- Assume a 3D-composed OS (i.e. the final rendering buffer is composited to the screen using texturing)
- A rendering and compositing rate of 30 fps in the 3D app
Under these assumptions, the memory bandwidth for scanout and compositing of the 30fps app is:
\[
1024 \times 600 \times (60 \times 4B + 30 \times (4B + 4B)) = 281 \text{ MB/s}
\]
We’ll add into this a very expensive case of rendering: rendering each 3D pass with a full-screen-sized (1024x600) texture and alpha blending. Then each 3D fragment rendered needs to
read the frame buffer (4B), read the texture (4B, since we load one texel per fragment, caching can be ignored) and write the framebuffer (4B). That’s a total of 12B of memory bandwidth per rendered fragment. So the memory cost of N frames of overdraw per frame at 30fps is:
\[ 1024 \times 600 \times 12B \times N \times 30 = 211 \times N \text{ MB/s} \]
So, with a budget of 1.5GB/s, the allowable overdraw would be about:
6 fragments per pixel \( (211 \text{ MB/s} \times 6 + 281 \text{ MB/s} \approx 1.5 \text{ GB/s}) \)
if we consider only memory bandwidth. But as seen above, the GPU-clock limit numbers for basic level fragment shaders are greater than this, so memory bandwidth can definitely be the limitation.
To be specific, the maximum number of cycles per fragment given 6 fragments of overdraw per pixel would be
\[ 300 \text{ MHz} / (1024 \times 600 \times 30 \times 6) \approx 2.7 \text{ cycles} \]
So the “break-even” point here appears to be about 2.7 cycles per fragment for the shader code.
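A hedged sketch of the combined example's arithmetic, under the assumptions listed above (all names are ours):

```python
# Sketch: scanout/compositing cost, per-layer overdraw cost, and the
# break-even shader length for 6 fragments of overdraw per pixel.
W, H, APP_FPS, SCAN_HZ = 1024, 600, 30, 60
MB, GPU_HZ = 1024**2, 300e6

scan_and_composite = W * H * (SCAN_HZ * 4 + APP_FPS * (4 + 4))  # ~281 MB/s
per_layer = W * H * 12 * APP_FPS                                # ~211 MB/s each

layers = 6
total_bps = scan_and_composite + layers * per_layer             # ~1.5 GB/s

# Cycles per fragment the GPU clock allows at this fill rate:
cycles = GPU_HZ / (W * H * APP_FPS * layers)                    # ~2.7
```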
Notice
ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent or patent rights of NVIDIA Corporation.
Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all information previously supplied. NVIDIA Corporation products are not authorized for use as critical components in life support devices or systems without express written approval of NVIDIA Corporation.
Trademarks
NVIDIA, the NVIDIA logo, Tegra, GeForce, NVIDIA Quadro, and NVIDIA CUDA are trademarks or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
Copyright
© 2008-2010 NVIDIA Corporation. All rights reserved.
A Practical Unification of Multi-stage Programming and Macros
Nicolas Stucki
EPFL
Switzerland
nicolas.stucki@epfl.ch
Aggelos Biboudis
EPFL
Switzerland
aggelos.biboudis@epfl.ch
Martin Odersky
EPFL
Switzerland
martin.odersky@epfl.ch
Abstract
Program generation is indispensable. We propose a novel unification of two existing metaprogramming techniques: multi-stage programming and hygienic generative macros. The former supports runtime code generation and execution in a type-safe manner while the latter offers compile-time code generation.
In this work we draw upon a long line of research on metaprogramming, starting with Lisp, MetaML and MetaOCaml. We provide direct support for quotes, splices and top-level splices, all regulated uniformly by a level-counting Phase Consistency Principle. Our design enables the construction and combination of code values for both expressions and types. Moreover, code generation can happen either at runtime à la MetaML or at compile time, in a macro fashion, à la MacroML.
We provide an implementation of our design in Scala and we present two case studies. The first implements the Hidden Markov Model, Shonan Challenge for HPC. The second implements the staged streaming library Strymonas.
1 Introduction
Generative programming [9] is widely used in scenarios such as code configuration of libraries, code optimizations [43] and DSL implementations [8, 41]. There are various kinds of program generation systems ranging from completely syntax-based and unhygienic, to fully typed [35]. Modern macro systems, like Racket’s, can extend the syntax of the language [11]. On the flipside, other program generation systems may provide a fixed set of constructs offering staged evaluation [10, 16] like MetaML [38] and MetaOCaml [6, 20, 21, 23].
The latter techniques established a new programming paradigm, called Multi-stage Programming (MSP) offering a principled, well-scoped and type-safe approach to code generation [37]. Programmers make use of two constructs, quote and splice, to delay and compose representations of expressions. Conceptually, users are able to manually indicate which parts of their program are dynamic and which static. Even though this technique is inspired by advancements in partial evaluation [26] it proved useful to have it in a programming language with first-class support. Part of the power of this programming model, comes from a regulation mechanism that attributes levels to terms [36]; these systems are type-safe in a modular way (type checking the generator ensures the validity of the generated code). Nowadays, gaining inspiration from MetaML and MetaOCaml, many programming languages provide support for similar mechanisms such as F#, Haskell (Template Haskell [33] and later Typed Template Haskell [15]), Converge [42] and others. While MSP is primarily a metaprogramming technique for runtime code generation it has been shown that its semantics can specify compile-time metaprogramming as well.
MacroML [12] showed that the treatment of staged evaluation can form the basis for generative macros (i.e. macros that cannot inspect code) or more precisely, function inlining. Theoretically it has been proven that MacroML’s interpretation is a denotational semantics where MetaML is the internal language of the model. Monnier et al. [25] first expressed inlining as staged computation but MacroML offered a user-level perspective by reusing the same mechanisms of quotes and splices; where splices can appear at the top-level (not nested in a quote). Modular Macros [44] prototyped a compile-time variant of MetaOCaml which also comprises part of our inspiration.
```scala
def power_s(x: Expr[Double], n: Int): Expr[Double] =
  if (n == 0) '(1.0)
  else if (n % 2 == 1) '(~x * ~power_s(x, n - 1))
  else '{ val y = ~x * ~x; ~power_s('(y), n / 2) }

inline def power(x: Double, inline n: Int): Double =
  ~power_s('(x), n)

val x = 2
// 1) staged, runtime generation
val power5 = ('((x: Double) => ~power_s('(x), 5))).run
power5(x)
// 2) macro, compile-time generation
power(x, 5)
// Both generate: { val y = x * x; val y2 = y * y; x * y2 }
```
Figure 1. Power function, staged or inlined
While the same line of work inspired many metaprogramming libraries and language features, to our knowledge built-in support for both run-time MSP and generative macros has not been implemented previously in a unifying manner. We advocate that such a unification has a two-fold benefit: 1) users rely on a single abstraction to express code generation and 2) having a single subsystem in the compiler favors maintainability. Our view regarding top-level splices is on par with the benefits of MSP on domain-specific optimizations [7, 21, 22]: in modern programming languages, inlining (à la C++) with a sufficiently smart partial evaluator is not necessarily equivalent with domain-specific optimizations that can be done at compile-time.
In our work a staged library can be used, unaltered, either as a macro or as a run-time code generator. We illustrate staging and macros via the folklore example of a simple power function, which has long been used to demonstrate partial evaluation techniques. The staged function `power_s` is defined recursively using the basic method of exponentiation by squaring. The inline function `power` becomes a macro by expanding `power_s`. In Figure 1 we see two different ways to use it: 1) staged, where generation happens at runtime, and 2) inlined, where generation happens at compile-time.
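For readers unfamiliar with staging, the generation strategy of `power_s` can be mimicked in plain Python by generating source strings. This is only an illustrative sketch (all names here are ours, with strings standing in for `Expr` values), not the paper's API:

```python
# Sketch: exponentiation-by-squaring as a code *generator*, mirroring
# the staged power_s with strings standing in for Expr values.
def power_s(x: str, n: int) -> str:
    """Return Python source that computes x**n."""
    counter = [0]
    def go(x: str, n: int) -> str:
        if n == 0:
            return "1.0"
        if n % 2 == 1:
            return f"({x} * {go(x, n - 1)})"
        counter[0] += 1
        y = f"y{counter[0]}"
        # Bind the square to a fresh name, like `val y = ~x * ~x` above.
        return f"(lambda {y}: {go(y, n // 2)})({x} * {x})"
    return go(x, n)

# "Run" the generated code, analogous to Expr.run in the paper:
power5 = eval(f"lambda x: {power_s('x', 5)}")
```

Here `power5(2.0)` evaluates the generated expression, which multiplies `x` by successive squares just like the generated block shown in Figure 1.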
Contributions In this paper, inspired from MetaML and MacroML we present a practical implementation of homogeneous generative metaprogramming (HGMP) for Scala:
- We present a design with quotes, splices, and top-level splices to support both MSP and macros simultaneously.
- We extend the operation of splicing to handle terms and types uniformly.
- We present how our system operates under a MetaML-inspired check, Phase Consistency Principle (PCP), that regulates free variable accesses in quoted and spliced expressions and types uniformly, for both MSP and macros.
Scala is a multi-paradigm programming language for the JVM offering a metaprogramming API called scala.reflect [5]. scala.reflect supports type-aware, runtime and compile-time code generation providing an expressive and powerful system to the user (both generative and analytical). Besides the success of scala.reflect, the API exposed compiler internals and gave rise to portability problems between compiler versions [24]. We implemented our system for the Dotty compiler for Scala and we believe that the design is portable in other languages as well.
Organization First, in Section 2, we introduce a motivating example to explain the high-level semantics of quotes and splices. In Section 3 we present PCP and the details of multi-staging and macros. In Section 4 we discuss how to implement cross-stage persistence (CSP) in this system. In Section 5 we show how to simplify the handling of type splices in quoted code. In Section 6 we discuss lifted lambdas and β-reduction optimizations. Section 7 describes the implementation in Dotty. Section 8 presents two case studies 1: (i) we give a sample solution to the Hidden Markov Model challenge as specified in Shonan Challenge for Generative Programming [1] and (ii) we port Strymonas [22], a staged library for streams. In Section 9 we discuss the related work and conclude in Section 10.
2 Overview of Quotes and Splices
Our metaprogramming system is built on two well-known fundamental operations: quotation2 and splicing. A quotation is expressed as '(...) or '{...} for expressions (both forms are equivalent) and as '[...] for types. Splicing is expressed with the ~ prefix operator.
If e is an expression, then '(e) or '{e} represents the opaque typed abstract syntax tree representing e. If T is a type, then '[T] represents the opaque type structure representing T. The precise definitions of typed abstract syntax tree and type structure do not matter for now; the expressions are used only to give some intuition that they represent code as a value. Conversely, ~e evaluates the expression e, which must yield a typed abstract syntax tree or type structure, and embeds the result as an expression (respectively, type) in the enclosing program. Informally, by quoting we delay the evaluation of an expression (we stage it, in MSP terms) and by splicing we evaluate an expression before embedding the result in the surrounding quote.
Quotes and splices are duals of each other. For an arbitrary expression e: T we have ~'(e) = e, and for an arbitrary type T we have ~'[T] = T; conversely, for expressions e: Expr[T] and t: Type[T] we have '(~e) = e and '[~t] = t.
1The code of the case studies, along with unit tests and benchmarks are at https://github.com/nicolasstucki/dotty-staging-gpce-2018
2Or more accurately quasiquote, which represents quotes with unquoted expressions getting evaluated first.
Quoted code values can have the following two types:
- \texttt{Expr[T]}: typed abstract syntax trees representing an expression of type \texttt{T}.
- \texttt{Type[T]}: type structures representing a type \texttt{T}.
Quoting can be seen as the function that takes expressions of type \texttt{T} to expressions of type \texttt{Expr[T]} and a type \texttt{T} to an expression of type \texttt{Type[T]}. Splicing takes expressions of type \texttt{Expr[T]} to expressions of type \texttt{T} and an expression of type \texttt{Type[T]} to a type \texttt{T}. For example, the code below presents \texttt{unrolled}, a recursive function which generates code that explicitly performs the given operation for each element of a known list. The elements of the list are expressions themselves, and the function \texttt{f} maps expressions of integers to expressions of \texttt{Unit} (or statements). We use a quote to delay the construction of the returned code and splices to insert the results of evaluating \texttt{f(head)} and \texttt{unrolled(tail, f)}.
```scala
def unrolled(list: List[Expr[Int]], f: Expr[Int] => Expr[Unit]): Expr[Unit] = list match {
  case head :: tail => '{ ~f(head); ~unrolled(tail, f) }
  case Nil => '()
}
```
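As a rough illustration, the same unrolling idea can be sketched in Python with strings standing in for `Expr` values (our own analogue, not the paper's API):

```python
# Sketch: emit one statement per known list element, instead of
# looping at runtime; source strings stand in for Expr values.
def unrolled(exprs, f):
    # f maps an expression (source string) to a statement (source string)
    stmts = [f(e) for e in exprs]
    return "\n".join(stmts) if stmts else "pass"

code = unrolled(["1", "2", "3"], lambda e: f"out.append({e} * {e})")
out = []
exec(code)   # runs the three generated statements
```

After `exec`, `out` holds the squares of the three unrolled elements, mirroring how the generated Scala code performs one spliced statement per element.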
Similarly, it is also possible to splice types into quoted code, giving us the capability of creating expressions of types not known statically. In the example below, x has type Expr[T], where the type T itself is only available as a value of type Type[T].
```scala
def some[T](x: Expr[T], t: Type[T]): Expr[Some[T]] = '{
  Some[~t](~x)
}

def someType[T](t: Type[T]): Type[Some[T]] = '[
  Some[~t]
]
```
In this section we showed how to unroll a loop for a known sequence of staged expressions. However, we have deliberately not yet discussed whether code generation happens at compile-time or run-time.
### 3 Unifying Multi-stage Programming and Macros
This section introduces our \textit{Phase Consistency Principle} (PCP) and how we employ it to check that the staged code is consistent. Then, we will see how quotes and splices are used in multi-stage programming and macros alike.
To start, let us adapt the requirements of our unrolled example: instead of unrolling a loop for a known sequence of staged expressions, we want to stage a loop over an unknown sequence. The following example shows what happens when we start nesting quotes, in splices, in quotes. \texttt{~f('(element))} is inside a quote, which means that the expression will generate some code that will be spliced in place. Inside it we refer to \texttt{'(element)}, where \texttt{element} is defined in the outer quote. Additionally, we make this version generic on \texttt{T} with \texttt{Type[T]}, which is spliced in the type of \texttt{val element: ~t}.
```scala
def staged[T](arr: Expr[Array[T]], f: Expr[T] => Expr[Unit])(implicit t: Type[T]): Expr[Unit] = '{
  var i: Int = 0
  while (i < (~arr).length) {
    val element: ~t = (~arr)(i)
    ~f('(element))
    i += 1
  }
}
```
### 3.1 Phase Consistency Principle
A fundamental \textit{phase consistency principle} (PCP) regulates accesses to free variables in quoted and spliced code:
- For any free variable reference \texttt{x}, the number of quoted scopes and the number of spliced scopes between the reference to \texttt{x} and the definition of \texttt{x} must be equal.
Here, the self-reference to an object (\texttt{this}) counts as a free variable. On the other hand, we assume that all imports are fully expanded and that \texttt{_root_.scala.Int} is not a free variable, so references to global definitions are allowed everywhere.
For example, in \texttt{staged}, \texttt{element} is consistent because there is one \texttt{~} and one \texttt{'} between the definition and its use. The same is true for \texttt{arr} and \texttt{t} even though there is a \texttt{'} first and then a \texttt{~}. The type \texttt{Int} of \texttt{var i: Int} is consistent as it is expanded to \texttt{_root_.scala.Int}, thus not considered a free variable. Primitive language operation such as \texttt{+=} in \texttt{i += 1} are also globally identifiable and hence not free variables. The variable \texttt{i} is consistent because it is only used locally in the \texttt{'}, i.e. it is not a free variable of any other quote or splice.
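The level bookkeeping behind these examples can be stated as simple arithmetic; this is our own sketch of the rule, not the compiler's implementation:

```python
# Sketch: PCP as level counting. A point's level is the number of
# enclosing quotes minus the number of enclosing splices.
def phase_consistent(def_level: int, use_level: int) -> bool:
    """A free-variable reference is legal iff the levels match."""
    return def_level == use_level

# `element` in `staged`: defined under one quote (level 1); used under
# that quote plus one more quote and one splice (1 + 1 - 1 = 1).
ok = phase_consistent(1, 1 + 1 - 1)
# A top-level splice referring to a level-0 runtime variable is rejected:
bad = phase_consistent(0, -1)
```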
The phase consistency principle can be motivated as follows: first, suppose the result of a program \texttt{P} is some quoted code \texttt{'( ... x ... )} that refers to a free variable \texttt{x} in \texttt{P}. This can be represented only by referring to the original variable \texttt{x}. Hence, the result of the program will need to persist the program state itself as one of its parts. This operation should not be considered positive in general as different stages might be run on different machines, as macros do. Hence this situation should be made illegal. Dually, suppose a top-level part of
a program is a spliced code \(~(\ldots \ x \ \ldots)\) that refers to a free variable \(x\) in \(P\). This would mean that we refer during \textit{construction} of \(P\) to a value that is available only during \textit{execution} of \(P\). This is of course impossible and therefore needs to be ruled out. Now, the small-step evaluation of a program will reduce quotes and splices in equal measure, using cancellation rules which informally state that \(\sim('(e)) \Rightarrow e\), \('(\sim e) \Rightarrow e\), \(\sim('[T]) \Rightarrow T\) and \('[\sim T] \Rightarrow T\). However, the evaluation will neither create nor remove quotes (or splices) individually, so the PCP ensures that program elaboration will lead to neither of the two unwanted situations described above.
In what concerns the range of features it covers, principled metaprogramming is quite close to the MetaML family of languages. One difference is that MetaML has no equivalent of the PCP: quoted code in MetaML can access variables in its immediately enclosing environment, with some restrictions and caveats, since such accesses involve serialization. However, this does not constitute a fundamental gain in expressiveness. Quotes and splices allow cross-stage persistence (CSP) by lifting, which enables the implementation of such accesses within the confines of the PCP. This is further explained in Section 4.1.
### 3.2 Supporting Multi-stage Programming
As discussed so far, the system allows code to be staged, i.e. to be prepared to be executed at a later stage. To consume staged code, \(\text{Expr}[\text{T}]\) not only provides the \(\sim\) prefix method, it also provides \(\text{run}\), which evaluates the code and returns a value of type \(\text{T}\). Note that \(\sim\) and \(\text{run}\) both map from \(\text{Expr}[\text{T}]\) to \(\text{T}\), but only \(\sim\) is subject to the PCP, whereas \(\text{run}\) is just a normal method. We also provide a \(\text{show}\) method to display the code in \(\text{String}\) form.
```scala
def sumCodeFor(arr: Expr[Array[Int]]): Expr[Int] = '{
  var sum = 0
  ~staged(arr, x => '(sum += ~x))
  sum
}

val sum: Array[Int] => Int = '( (arr: Array[Int]) => ~sumCodeFor('(arr)) ).run
```
#### Limitations to Splicing
Quotes and splices are duals as far as the PCP is concerned. But there is an additional restriction that needs to be imposed on splices to guarantee soundness: \textit{code in splices must be free of scope extrusions}, which we guarantee by disallowing effects. The restriction prevents code like this:
```scala
var x: Expr[T] = _
'( (y: T) => ~(x = 'y; 1) )
```
This code, if it were accepted, would \textit{extrude} a reference to a quoted variable \(y\) from its scope. We could then subsequently access the variable outside the scope where it is defined, which is problematic. The code is clearly phase consistent, so we cannot use the PCP to rule it out. Instead, we postulate a future effect system that can guarantee that splices are pure. In the absence of such a system we simply demand that spliced expressions are pure by convention, and allow for undefined compiler behavior if they are not.
A second limitation comes from the use of the method \(\text{run}\) in splices. Consider the following expression:
```scala
'( (x: Int) => ~('(x).run; 1) )
```
This is again phase correct but will lead us into trouble. Indeed, evaluating \(\text{run}\) will reduce the expression \('(x).\text{run}\) to \(x\). But then the result
```scala
'( (x: Int) => ~(x; 1) )
```
is no longer phase correct. To prevent this soundness hole it seems easiest to classify \(\text{run}\) as a side-effecting operation. It would thus be prevented from appearing in splices. In a base language with side-effects we’d have to do this anyway: Since \(\text{run}\) runs arbitrary code it can always produce a side effect if the code it runs produces one.
### 3.3 Supporting Macros
Seen by itself, quotes-and-splices-based metaprogramming looks more like a system for staging than one supporting macros. But combined with Dotty's \texttt{inline}\(^3\) it can be used as a compile-time metaprogramming system as well, effectively executing the staging at compile-time and generating the full program with no overhead at run-time.
#### Inline
In Dotty the \texttt{inline} keyword can be added to a \texttt{val}, a \texttt{def} or a parameter of an \texttt{inline} \texttt{def}. A definition marked as \texttt{inline} will be inlined when the code is type checked. Informally speaking, a \texttt{val} or a parameter marked as such will be inlined only if it is a constant or an inlined constant of primitive value type (\texttt{Boolean}, \texttt{Byte}, \texttt{Short}, \texttt{Int}, \texttt{Long}, \texttt{Float}, \texttt{Double}, \texttt{Char} or \texttt{String}). Other values are disallowed to avoid moving any side effects and changing the semantics of the program.

\(^3\)Dotty's \texttt{inline} keyword guarantees inlining and inlines the code at type-checking time. [40]
Function definitions are always inlined in a semantic preserving way as they are in essence $\beta$-reductions. Parameters have call by value (CBV) semantics, hence they are evaluated before the invocation to the function and bound to local vals. If the parameters are marked as call by name (CBN) (which is realized by prefixing the type with $=>$) then the argument is directly inlined in each reference to the parameter. Inline parameters are inlined in the resulting code and guaranteed to be a constant value.
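As a small sketch of the behavior described above (the names \texttt{bits} and \texttt{twice} are our own, not from the text):

```scala
inline val bits = 8                         // constant of primitive type: inlined at use sites
inline def twice(inline x: Int): Int = x + x  // inline def with an inline parameter

twice(bits)   // type checks as 8 + 8: the inline parameter is a known constant
```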
**Macro** In combination with inline, macro elaboration can be understood as a combination of a staging library and a quoted program. An inline function, such as Macros.sum, that contains a splice operation outside an enclosing quote is called a macro. Macros are supposed to be expanded in a subsequent phase, i.e. in a quoted context. Therefore, they are also type checked as if they were in a quoted context. For instance, the definition of sum is type-checked as if it appeared inside quotes. This makes the call from sum to sumCodeFor phase-correct, even if we assume that both definitions are local.
```scala
object Macros {
  inline def sum(arr: Array[Int]): Int = ~sumCodeFor('(arr))
def sumCodeFor(arr: Expr[Array[Int]]): Expr[Int] = ...
}
```
On the other side, we have an App object that uses the sum function.
```scala
object App {
val array = Array(1, 2, 3)
Macros.sum(array)
}
```
When this program is compiled it can be thought of as a quoted program that is being staged. Inlining the sum function would give the following phase correct App:
```scala
object App {
val array = Array(1, 2, 3)
  ~Macros.sumCodeFor('(array))
}
```
Phase correctness is unchanged for Macros and array; inlining preserves the PCP. But now we have a top-level splice in App, which is not an issue, since we assumed that App is a quoted program being staged. The next step is to evaluate sumCodeFor(array) and place the result in the splice.
```scala
object App {
val array = Array(1, 2, 3)
  ~('{ var sum = 0; ...; sum }) // or { var sum = 0; ...; sum } by the cancellation rule
}
```
The second role of inline in a macro is to make constants available in all stages. To illustrate this, consider the sumN function that makes use of a statically known size:
```scala
object Macros {
  inline def sumN(size: Int, arr: Array[Int]): Int = ~sumN_m(size, '(arr))
def sumN_m(size: Int, arr: Expr[Array[Int]]): Expr[Int] = ...
}
```
The reference to size as an argument in sumN_m(size, arr) seems not phase-consistent, since size appears in a splice without an enclosing quote. Normally that would be a problem because it means that we need the value of size at compile-time, which is not available in general. But since size is an inline parameter, we know that at the macro expansion point size will be a known constant value. To reflect this, we will assume that all inline values are not free variables as they will be known after inlining:
- If x is an inline value (or an inline parameter of an inline function) it is not a free variable of the quote or splice.
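For instance (our own sketch, continuing the sumN example), a call site with a literal size has the constant available after inlining:

```scala
// At the call site, size is the literal 3 after inlining:
Macros.sumN(3, array)
// expands (conceptually) to
// ~Macros.sumN_m(3, '(array))
// where 3 can be used at macro-expansion time to drive loop unrolling
```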
Additionally, we may also have macros with type parameters that are used inside a top-level splice. The type parameter T in the following version of foreach exemplifies this.
```scala
object Macros {
inline def foreach[T](arr: Array[T], f: T => Unit): Unit = ...
}
```
```scala
object App {
val array = Array(1, 2, 3)
Macros.foreach(array, { x => println(x) })
}
```
When inlined, the type T becomes a known type. This implies that macro type parameters can receive the same treatment as inline parameters.
- If T is a type parameter of an inline function, then T is not a free variable of the quote or splice.
**Avoiding an Interpreter** Providing an interpreter for the full language is quite difficult, and it is even more difficult to make that interpreter run efficiently. To avoid needing a full interpreter, we can impose the following restrictions on the use of splices to simplify the evaluation of the code in top-level splices.
1. A top-level splice must appear in an inline function (turning that function into a macro).
2. Splices directly inside splices are not allowed.
3. A macro is effectively final and it may not override other methods.
4. Macros are consumed by other modules/libraries.
These restrictions allow us to stage and compile (at macro compilation time) the code that would otherwise be interpreted at macro expansion time, which entails that the macro will be expanded using compiled code. This is faster and does not require the implementation of an AST interpreter for the full language.
## 4 Cross-stage Persistence
Cross-stage persistence refers to persisting some value or type, available in the current stage, for use in a future stage. We support persisting base types, ADT encodings (classes) and abstract types by copying, using Liftable. Fully qualified names (of terms or types) are always shared. Finally, polymorphic lifting (e.g., def lift[T](x: T) = '(x)) is not supported directly unless written as def lift[T: Liftable](x: T) = x.toExpr.
### 4.1 Lifting Expressions
Consider the implementation of sumN_m used in the previous macro:
```scala
def sumN_m(size: Int, arr: Expr[Array[Int]]): Expr[Int] = '{
  assert((~arr).length == ~size.toExpr)
  var sum = 0
  ~unrolled(List.tabulate(size)(_.toExpr),
    x => '(sum += (~arr)(~x)))
  sum
}
```
The assertion, assert((~arr).length == ~size.toExpr), looks suspicious. The parameter size is declared as an Int, yet it is converted to an Expr[Int] with toExpr. Shouldn't size be quoted instead? In fact, this would not work, since replacing size by '(size) in the clause would not be phase correct.
What happens instead is that the extension method toExpr is applied. The expression size.toExpr then expands to the equivalent of:
```scala
implicitly[Liftable[Int]].toExpr(size)
```
The extension method says that values of types implementing the Liftable type class can be lifted (serialized) to Expr values using toExpr when scala.quoted._ is imported. We provide instance definitions of Liftable for several types including Boolean, String, and all primitive number types. For example, Int values can be converted to Expr[Int] values by wrapping the value in a literal tree node. This makes use of the underlying tree representation in the compiler for efficiency. But the Liftable instances are nevertheless not magic in the sense that they could all be defined in a user program without knowing anything about the representation of Expr trees. For instance, here is a possible instance of Liftable[Boolean]:
```scala
implicit def BooleanIsLiftable: Liftable[Boolean] = new {
  def toExpr(b: Boolean): Expr[Boolean] =
    if (b) '(true) else '(false)
}
```
Since Liftable is a type class, its instances can be conditional. For example, a List is liftable if its element is:
```scala
implicit def ListIsLiftable[T: Liftable: Type]: Liftable[List[T]] = new {
  def toExpr(xs: List[T]): Expr[List[T]] = xs match {
    case Nil      => '(Nil: List[T])
    case x :: xs1 => '(~x.toExpr :: ~toExpr(xs1))
  }
}
```
In the end, Liftable resembles very much a serialization framework. Like the latter, it can be derived systematically for all collections, case classes and enums.
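Following this analogy, a Liftable instance for a simple case class can be written by lifting each field and rebuilding the value inside a quote (a sketch with our own Point type, not from the text):

```scala
case class Point(x: Int, y: Int)

implicit def PointIsLiftable: Liftable[Point] = new {
  def toExpr(p: Point): Expr[Point] =
    '(Point(~p.x.toExpr, ~p.y.toExpr))  // lift each field, reassemble in the next stage
}
```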
### 4.2 Implicitly Lifted Types
The metaprogramming system has to be able to take a type T and convert it to a type structure of type Type[T] that can be spliced. This means that all free variables of the type T refer to types and values defined in the current stage.
For a reference to a global class this is easy: just issue the fully qualified name of the class. Members of reifiable types are handled by reifying the containing type together with the member name. But what to do about references to type parameters or local type definitions that are not defined in the current stage? Here, we cannot construct the Type[T] tree directly, so we need to get it from a possibly recursive implicit search. For instance, to provide implicitly[Type[List[T]]], the lifted type Type[List[T]] required by ListIsLiftable where T is not defined in the current stage, we construct the type constructor of List applied to the splice of the result of searching for an implicit Type[T], which is equivalent to '[List[~implicitly[Type[T]]]].
## 5 Healing Phase of Types
To avoid clutter, the compiler tries to heal a phase-incorrect reference to a type into a spliced lifted type, by rewriting T to ~implicitly[Type[T]]. For instance, the user-level definition of staged would be rewritten, replacing the reference to T with ~implicitly[Type[T]]. The implicitly query succeeds because there is an implicit value of type Type[T] available (namely the evidence parameter corresponding to the context bound T: Type⁴), and the reference to that value is phase-correct.
⁴The notation T: Type is called a context bound and it is a shorthand for the (implicit t: Type[T]) parameter in the original signature.
If that was not the case, the phase inconsistency for \( T \) would be reported as an error.
```scala
def staged[T: Type](arr: Expr[Array[T]], f: Expr[T] => Expr[Unit]): Expr[Unit] = '{
  var i = 0
  while (i < (~arr).length) {
    val element: ~implicitly[Type[T]] = (~arr)(i)
    ~f('(element))
    i += 1
  }
}
```
## 6 Staged Lambdas
When staging programs in a functional language there are two unavoidable abstractions: staged lambda \( \text{Expr}[T => U] \) and staging lambda \( \text{Expr}[T] => \text{Expr}[U] \). The former is a function that will exist in the next stage whereas the second one is a function that exists in the current stage.
Below we show an instance where these two do not match: \('(f)\) has type \( \text{Expr}[\text{Int} => \text{Unit}] \), while staged expects an \( \text{Expr}[\text{Int}] => \text{Expr}[\text{Unit}] \). In general it is practical to have a mechanism to go from \( \text{Expr}[T => U] \) to \( \text{Expr}[T] => \text{Expr}[U] \) and vice versa (as described in [38]).
```scala
inline def foreach(arr: Array[Int], f: Int => Unit): Unit =
  ~foreach('(arr), x => '(f)(x))
```
We provide a conversion from \( \text{Expr}[T => U] \) to \( \text{Expr}[T] => \text{Expr}[U] \) with the decorator AsFunction. This decorator gives \( \text{Expr} \) the apply operation of an applicative functor: \( \text{Expr} \)s over function types can be applied to \( \text{Expr} \)s over arguments. The definition of \( \text{AsFunction}(f).\text{apply}(x) \) is assumed to be functionally the same as \( '((\sim f)(\sim x)) \); however, it optimizes this call by returning the result of beta-reducing \( f(x) \) if \( f \) is a known lambda expression.
The \( \text{AsFunction} \) decorator distributes applications of \( \text{Expr} \) over function arrows:
\[
\text{AsFunction}(\_).\text{apply}: \text{Expr}[T => U] => (\text{Expr}[T] => \text{Expr}[U])
\]
We can use the conversion in our previous foreach example as follows
```scala
~foreach('(arr), x => '(f)(x))
```
Its dual, let’s call it \( \text{reflect} \), can be defined in user space as follows:
```scala
def reflect[T: Type, U: Type](f: Expr[T] => Expr[U]): Expr[T => U] =
  '( (x: T) => ~f('(x)) )
```
```
## 7 Implementation
The described metaprogramming system is implemented directly in the Dotty compiler [39]; however, it can be ported to other ecosystems as well. The necessary ingredients to port the design to another ecosystem are the following:
- A typed and lexically-scoped language.
- Syntax support for quotes and splices.
- Support for the serialization of typed code.
- Support for separate compilation or the use of an existing interpreter (for macros).
### 7.1 Syntax changes
A splice \( \sim e \) on an expression of type \( \text{Expr}[T] \) is a normal prefix operator, defined as \texttt{def unary_~}. To make splicing work as a type operator on \( \text{Type}[T] \) as well, we need a syntax change that introduces prefix operators on types. With this addition, we can implement the type splice with \texttt{type unary_~}:
```scala
sealed abstract class Expr[T] {
def unary_~: T
}
sealed abstract class Type[T] {
type unary_~ = T
}
```
Quotes are supported by introducing the new tokens `'(`, `'{` and `'[`, and adding the quoted variants `'(...)`, `'{...}` and `'[...]` to the valid expressions and types.
### 7.2 Implementation in Dotty
Quotes and splices are primitive forms in the generated typed abstract syntax trees. They are eliminated in an expansion phase after type checking and before starting the transformation of the trees to bytecode. This phase checks that the PCP holds, pickles contents of the quotes and expands top-level splices inserted by macros. All of these can be performed at the same time.
**PCP check** To check phase consistency we traverse the tree top-down remembering the context stage. Each local definition in scope is recorded with its level and each reference to a definition is checked against the current stage.
```scala
// stage 0
'( // stage 1
val x = ... // stage 1 with (x -> 1)
~( // stage 0 (x -> 1)
val y = ... // stage 0 with (x -> 1, y -> 0)
x // error: defined at level 1 but used in stage 0
)
// stage 1 (x -> 1)
x // x is ok
)
```
\(^5\)Without the \( \beta \)-reduction requirement, it would be possible to implement \texttt{AsFunction} in user code.
**Pickling quotes** If the outermost scope is a quote, we need to pickle \cite{19} the contents of the quote to have it available at run-time. We implement this by pickling the tree as a TASTY \cite{27} binary, which is stored in a compacted string.
TASTY is the compact typed abstract syntax tree serialization format of Dotty. It usually pickles the full code after type checking and keeps it along the generated classfiles. This is used for separate and incremental compilation, documentation generation, language server for IDE, code decompilation and now quotes.
It is not possible to pickle the tree inside the quote directly, as the contents of embedded splices are at stage 0 and may contain free variables. To avoid this, we introduce holes in the trees that will be pickled in their place: each splice in the quote will have a hole that replaces it. Holes are encoded as a list of functions \texttt{fillHole}; each function contains the code that will be used to fill the \(i\)th hole. Each hole has an argument list, listing the variables defined in the quote and referenced inside the splice. These arguments (e.g., \texttt{'(x)} in the code below) will be quoted to retain phase consistency.
```scala
'{ val x: Int = ...
   ~{ ... '{ ... x ... } ... }
}
// Is transformed to
'{ val x: Int = ...
   ~fillHole(0).apply('(x))
}
```
The contents of the splices will be used to construct each element of the hole. Each element is a lambda that receives the quoted arguments and will return the evaluation of the code in the splice. The lambda will receive as parameters the quotes that were passed as arguments in the previous transformation. The quoted parameters need to be spliced in the body of the splice to keep phase consistency\footnote{Note that $x$ must be inside some quote to be phase consistent in the first place.}.
```scala
~{ ... '{ ... x ... } ... }
// Is transformed to
(x: Expr[Int]) => { ... '{ ... ~x ... } ... }
```
Once we have applied the first transformation to the quoted code, we can pickle it and keep the contents of the splices in a separate structure. We use stagedQuote to put together the parts of the quote in a data structure. As an example, consider the following quote:
```scala
val arr: Expr[Array[Int]] = ...
'{
  var sum = 0
  ~staged(arr, x => '(sum += ~x))
  sum
}
```
Which will be transformed to the following code:
```scala
val arr: Expr[Array[Int]] = ...
stagedQuote(
  tasty = """... // PICKLED TASTY BINARY of:
             // { var sum = 0
             //   ~fillHole(0).apply('(sum))
             //   sum }
          """,
  fillHole = List(
    (sum: Expr[Int]) => staged(arr, x => '((~sum) += ~x))
  )
)
```
After the presented transformation, the contents of \texttt{fillHole} use the same transformation recursively to pickle the inner quote \texttt{'((~sum) += ~x)}.
**Compiling macros** To avoid the need for a complex interpreter when evaluating the code in top-level splices, we reuse part of the pickling mechanism. For example, in \texttt{sum} we do not wish to interpret \texttt{staged(...)} when inlining.
```scala
object Macros {
  inline def sum(arr: Array[Int]): Int = {
    var sum = 0
    ~staged('(arr), x => '(sum += ~x))
    sum
  }
}
```
The body of the macro is treated as quoted code and the tree is split into its parts.
Parameters of the macro are treated as defined outside of the quote and need to be added to the hole parameters. Parameters that were marked as inline are passed directly as values and lifted if used in a quote. We will get a version of the body that has a hole in place of the original contents of the splices. The new version of the body of \texttt{sum} simply replaces the old one.
```scala
inline def sum(arr: Array[Int]): Int = {
  var sum = 0
  ~sum_hole(0).apply('(arr), '(sum))
  sum
}
```
As with the pickled quotes, we also get the contents of the splices in the form of a list of lambdas, \texttt{sum\_hole}, which will be placed in a static method and compiled along with \texttt{sum}.
```scala
def sum_hole = List(
  (arr: Expr[Array[Int]], sum: Expr[Int]) =>
    staged(arr, x => '((~sum) += ~x))
)
```
After this transformation, all top-level splices contain a tree with a call to a parameterless static method, a statically
known index and a list of quoted (or inline) arguments. The interpreter that handles the macro splice expansion only needs to be able to handle these trees.
**Unpickling quotes** To unpickle quotes, we unpickle most of the tree as usual with TASTY. But if we encounter a hole, it is filled using the corresponding `fillHole`: the index of the hole determines which `fillHole` lambda must be used, and the arguments of the hole are passed to it.
For inlined macros it is slightly different, as the tree will already be inlined with holes. Then we just need to load the corresponding `fillHole` via reflection and expand it normally.
**Running a quote** When executing `Expr.run`, an instance of the compiler consumes the `Expr`. This is an instance of the normal Dotty compiler that is provided by a quoted `Toolbox`. It provides caching and thread safety over the accesses to the compiler. Multiple instances can be created if needed. In the `Toolbox`, the compiler will load the tree from its TASTY and place the contents of the tree in a method of a new class. This class is compiled to bytecode and executed.
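Putting `run` and `show` together, a minimal usage sketch in the paper's notation looks like this (`Toolbox.make` is an assumed factory name, not confirmed by the text):

```scala
implicit val toolbox: Toolbox = Toolbox.make  // assumed factory; supplies the compiler instance

val code: Expr[Int] = '(1 + 2)
println(code.show)          // displays the quoted source code in String form
val result: Int = code.run  // compiles the quote to bytecode and evaluates it
```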
## 8 Case Studies
We present two case studies. Firstly, we give a sample solution to the Shonan Challenge for Generative Programming [1]. This case study shows that our system captures the basic needs for abstraction and reusability of staged code. Secondly, we port Strymonas [22], a staged library for streams, showing that a more complex library can optimize pipelines either in a runtime or compile-time fashion, unaltered.
### 8.1 Case Study 1: Linear Algebra DSL
This case study presents a way to define a generic and composable Linear Algebra DSL that can be used on staged and non-staged code alike. We implemented the framework presented in [21] that provided optimizable matrix multiplication as part of the Shonan HMM challenge.
To simplify the presentation, in this section we will only show how to perform a vector dot product. We will present an implementation for vector dot product that can stage or unroll the operations, use statically known vectors or dynamically accessible ones, and work on any kind of elements. The same abstraction would be extended and composed for a matrix multiplication.
#### 8.1.1 Ring Arithmetic
First we have to see how it is possible to abstract over operations that are staged and ones that are not. For this we simply define an interpreter interface for our operations; in this case it is the mathematical ring, including subtraction. Apart from the operations, the interface also provides the zero and one values for them.
```scala
trait Ring[T] {
val zero: T
val one: T
val add: (x: T, y: T) => T
val sub: (x: T, y: T) => T
val mul: (x: T, y: T) => T
}
```
```scala
class RingInt extends Ring[Int] {
val zero = 0
val one = 1
val add = (x, y) => x + y
val sub = (x, y) => x - y
val mul = (x, y) => x * y
}
```
As shown, for a `Ring[Int]` all operations are simply interpreted. If we implement a `Ring[Expr[Int]]`, all operations will be staged. In fact, `RingIntExpr` is a small staged interpreter: it is effectively a compiler for operations on `Int`.
```scala
class RingIntExpr extends Ring[Expr[Int]] {
  val zero = '(0)
  val one = '(1)
  val add = (x, y) => '(~x + ~y)
  val sub = (x, y) => '(~x - ~y)
  val mul = (x, y) => '(~x * ~y)
}
```
To implement rings on structured types, such as complex numbers, we implement them generically based on a ring on the element type. This inner ring is used to perform all operations on the elements.
```scala
case class Complex[T](re: T, im: T)
class RingComplex[U](u: Ring[U]) extends Ring[Complex[U]] {
val zero = Complex(u.zero, u.zero)
val one = Complex(u.one, u.zero)
val add = (x, y) => Complex(u.add(x.re, y.re), u.add(x.im, y.im))
val sub = ...
val mul = ...
}
```
This implementation of `RingComplex` is polymorphic in the type of elements it contains. Hence it can be instantiated for `Complex[Int]` or `Complex[Expr[Int]]` by instantiating `RingComplex` with `RingInt` or `RingIntExpr` respectively. Using this composability, we can implement all possible combinations of rings by implementing the ring for each type only twice (unstaged and staged).
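Concretely, the two instantiations mentioned above can be sketched as follows (the value names are our own):

```scala
// Ring on fully static complex numbers: every operation computes a value
val staticComplex: Ring[Complex[Int]] = new RingComplex(new RingInt)

// Ring on staged complex numbers: every operation generates code instead
val stagedComplex: Ring[Complex[Expr[Int]]] = new RingComplex(new RingIntExpr)
```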
#### 8.1.2 Vector Operations
Throughout this paper we have seen several implementations of a staged `foreach` operation that used a `while` loop or was unrolled. We will now use a vector abstraction that abstracts over both the element type and the index type. The reduce operation will be provided by the `VecOps[Idx, T]` interface.
```scala
case class Vec[Idx, T](size: Idx, get: Idx => T)

trait VecOps[Idx, T] {
  val reduce: ((T, T) => T, T, Vec[Idx, T]) => T
}
```
Now we can implement a version of the operation that executes directly (`VecOps[Int, T]`) and one that stages the operations (`VecOps[Expr[Int], Expr[T]]`).
```scala
class StaticVecOps[T] extends VecOps[Int, T] {
  val reduce: ((T, T) => T, T, Vec[Int, T]) => T =
    (plus, zero, vec) => {
      var sum = zero
      for (i <- 0 until vec.size)
        sum = plus(sum, vec.get(i))
      sum
    }
}

class StagedVecOps[T: Type] extends VecOps[Expr[Int], Expr[T]] {
  val reduce: ((Expr[T], Expr[T]) => Expr[T], Expr[T], Vec[Expr[Int], Expr[T]]) => Expr[T] =
    (plus, zero, vec) => '{
      var sum = ~zero
      var i = 0
      while (i < ~vec.size) {
        sum = ~plus('(sum), vec.get('(i)))
        i += 1
      }
      sum
    }
}
```
#### 8.1.3 Linear Algebra DSL
Now we can implement our linear algebra DSL, which provides the dot product on vectors. We abstract over both the vector operations and the element ring operations: `dot` first creates a vector multiplying the elements pairwise using the ring, and then reduces it using the ring's addition.
```scala
class Blas1[Idx, T](r: Ring[T], ops: VecOps[Idx, T]) {
  def dot(v1: Vec[Idx, T], v2: Vec[Idx, T]): T = {
    val v3 = Vec(v1.size, i => r.mul(v1.get(i), v2.get(i)))
    ops.reduce(r.add, r.zero, v3)
  }
}
```
This is all we need; now we can instantiate Blas1 with different instances of Ring and VecOps.
```scala
// Computes the dot product on vectors of Int
val dotInt = new Blas1(new RingInt, new StaticVecOps).dot
// will compute the value 4
// Computes the dot product on vectors of Complex[Int]
val dotComplexInt = new Blas1(new RingComplex(new RingInt), new StaticVecOps).dot
// will compute the value Complex(-5, 13)
```
#### 8.1.4 Modular Optimizations
We will now see how to unroll the dot product of a staged vector with a statically known vector. The simple solution is to lift the second vector's elements and use dotExprIntExpr as we did in the previous example. A shortcoming of this approach is that it cannot partially evaluate the lifted values.
Instead, we will abstract over the fact that we have a value of type T or Expr[T]. To achieve this we define partially known values PV[T].
```scala
sealed trait PV[T] {
def expr(implicit l: Liftable[T]): Expr[T]
}
case class Sta[T](x: T) extends PV[T] {
...
}
case class Dyn[T](x: Expr[T]) extends PV[T] {
  ...
}
```
With this abstraction it is possible to define a Ring[PV[U]] that operates on both a Ring[U] and a Ring[Expr[U]]. In it, constant-folding optimizations can be performed on statically known elements. In general, this ring can be composed with the rings for any given type.
```scala
class RingPV[U: Liftable](u: Ring[U], eu: Ring[Expr[U]]) extends Ring[PV[U]] {
  val zero: PV[U] = Sta(u.zero)
  val one: PV[U] = Sta(u.one)
  val add = ...
  val sub = ...
  val mul = ...
}
```
Using this ring, we can optimize away all zeros from the dot product on vectors of PV[Int] and Complex[PV[Int]]. We do not use PV[Complex[Int]], as it would stage the complex number before the optimizations can take place.
```scala
val RingPVInt = new RingPV[Int](new RingInt, new RingIntExpr)
// Unrolled dot product on vectors of PV[Int], mixing Int and Expr[Int] elements
val dotIntOptExpr = new Blas1(RingPVInt, new StaticVecOps).dot
// dotIntOptExpr will generate code of the form
// '{ arr(1) + arr(3) }
```
### 8.2 Case Study 2: Stream Fusion, to Completeness
List processing has been a key abstraction in functional programming languages [3]; an abstraction that is tightly coupled with the notion of lazy evaluation [14]. A list processing library is typically equipped with a set of operators to create lists, transform them, and consume them into scalars or other kinds of data structures. Data.List in Haskell, a lazy programming language, relies on writing the list processing functions using appropriate data structures, providing a set of rewrite rules to identify patterns in the code, and then relying on the optimizing phase of GHC [29] to apply them [13]. The expected result is to compile a pipeline into a low-level, tight loop with zero abstraction cost, i.e., no intermediate data structures or heap-allocated objects. For Scala and similar eager programming languages, stream libraries simulate laziness on their own, relying either on unfolds (pull-based streams) or on folds (push-based streams) [2].
Strymonas, based on unfolds [22], implements a staged stream library that fuses pipelines, generating tight loops. Strymonas comes in two flavors, one in Scala/LMS and one in BER MetaOCaml. In this section we discuss a third port of this library in Scala, demonstrating that Scala is now equipped with the necessary abstractions to support Strymonas. There are two kinds of combinators in this design: a) regular and b) *Raw versions. The former have the familiar signatures we know, and the latter are used to pattern match on the stream shape (Producer) of a downstream combinator, manipulating its shape accordingly. The latter can be seen as code combinators that operate on a "suitable intermediate representation" [7]. Additionally, they use CPS internally to enable let-insertion in stateful combinators. Since Strymonas does not rely on control effects, our system can fully support it. Stream pipelines in Strymonas can either be staged or used as a macro, as shown in Section 1.
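To give a flavor of such a staged pipeline, the following sketch (with assumed combinator names in the paper's notation, not the library's exact API) shows how each combinator manipulates code values so that the whole pipeline fuses into one loop:

```scala
// Hypothetical Strymonas-style pipeline: names Stream.of, map and fold are assumptions
def sumOfSquaresCode(arr: Expr[Array[Int]]): Expr[Int] =
  Stream.of(arr)                         // assumed entry point producing a staged stream
    .map(x => '(~x * ~x))                // staged element-wise transformation
    .fold('(0), (a, x) => '(~a + ~x))    // consumed into a single tight loop, no intermediates
```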
A note on the performance of the generated code. The benchmarks in Figure 2 demonstrate that the use of macros elides the costs of runtime code generation, as expected. The macro-generated and staging-generated code were benchmarked after warming up the code (to force JIT compilation). We also show the additional cost of staging and then running the resulting function. The overhead is the combination of compiling the code to bytecode, then loading and JIT-compiling it. Additionally, on a cold JVM, the first execution of run takes around 2.5 seconds to load the compiler. However, we omit it from the figure since it is amortized during warmup. Comparatively, macros do not incur such a performance penalty because the compiler is already loaded.
For our benchmarks we used the Java Microbenchmark Harness (JMH) [34] tool: a benchmarking tool for JVM-based languages that is part of the OpenJDK. The system we use runs an x64 OSX High Sierra 10.13.6 operating system on bare metal. It is equipped with a 4 GHz Intel Core i7 CPU (i7-6700K) having 4 physical and 8 logical cores. The total memory of the system is 16 GB of type 1867 MHz DDR3.
9 Related Work
Our system is heavily inspired by the long line of work by MetaML [38], MetaOCaml [6] and BER MetaOCaml [20]. We rely on the latter for most of our design decisions. We offer the capability of pretty printing generated code, but our system, contrary to BER MetaOCaml, compiles to native code first. In our case, native code (JVM bytecode) was simpler to implement since we rely on TASTY, the serialization format for typed syntax trees of Scala programs [27]. BER MetaOCaml offers the capability to programmers to process code values in their own way. We plan to make our system extensible in the same way but by relying on TASTY.
Modular Macros [44] offered a compile-time variant of BER MetaOCaml by introducing a new keyword to enable macro expansion. In their work they demonstrate that an existing staged library needs intrusive changes to sprinkle the code with the aforementioned keyword. In our case we need just one definition with a top-level splice, and we reuse a staged library unchanged. Modular Macros is a separate project from BER MetaOCaml, so the two techniques were not composed.
MacroML [12] pioneered a compile-time version of MetaML, showing at a theoretical level that the semantics of MetaML subsume macros; MacroML essentially translates macro programs to MetaML programs. Our work presents a confluence of macros and multi-stage programming in the same language (considering the imperative features of Scala, something left out of MacroML’s development). Even though this merge was not demonstrated in the original work by Ganz et al., we believe that their work provides useful insights for the future foundations of our system.
Template Haskell [33] is a very expressive metaprogramming system that offers support for code generation not only of expressions but also of definitions, instances and more. Template Haskell uses the Lift type class to perform CSP; we used the same technique for our Liftable construct. Code generation in Template Haskell is essentially untyped; the generated code is not guaranteed to be well-typed. Typed Template Haskell, on the other hand, also inspired by MetaML and MetaOCaml, offers a more restrictive view in order to pursue a disciplined system for code generation. Typed Template Haskell is still considered to be unsound under side effects [18], providing the same static guarantees as MetaOCaml. To avoid these shortcomings we, too, permit no side effects in splice operations, although we regard side effects as an important aspect of programming code generators. The decision to disallow effects in splices was taken because it was a simple approach to avoid the unsoundness hole of scope extrusion. Combining code generators with delimited control (e.g., restricting the code generator’s effects to the scope of generated binders [17]) was out of the scope of this paper but remains a goal of our future work.
F# supports code quotations that offer a quoting mechanism that is not opaque to the user effectively supporting analysis of F# expression trees at runtime. Programmers can quote expressions and they are offered the choice of getting back either a typed or an untyped expression tree. F# does not support multi-stage programming and currently lacks a code quotation compiler natively\(^7\). Furthermore, lifting is not supported. Finally, F# does not support splicing of types into quotations.
Scala offers experimental macros (called blackbox in Scala parlance) [4, 5]. The provided macros are quite different from our approach: they directly expose an abstraction of the compiler’s ASTs and the current compilation context, and thus require specialized knowledge of the compiler internals. Quasiquotes, implemented on top of macros using string interpolators [32], simplify code generation; however, the user is still exposed to the same complex machinery inherited from macros. Scala also offers macros that can modify existing types in the system (whitebox and annotation macros). They have proven dangerously powerful; they can arbitrarily affect typing in unconventional ways, giving rise to problems that deteriorate IDE support, compiler evolution and code understanding.
Lightweight Modular Staging (LMS) offers support for multi-stage programming in Scala [31]. LMS departs from the use of explicit staging annotations by adopting a type-based embedding. On the contrary, a design choice of our system is to offer explicit annotations along the lines of MetaML. We believe that programming with quotes and splices reflects the textual nature of this kind of metaprogramming and gives the necessary visual feedback to the user, who needs to reason about code fragments. LMS is a powerful system that preserves the execution order of staged computations and also offers an extensible graph-based IR. On the flip side, two shortcomings of LMS, namely high compile times and the fact that it is based on a fork of the compiler, were recently discussed as points of improvement [30].
Squid [28] advances the state of the art of staging systems and puts quasiquotes at the center of user-defined optimizations. The user can pattern match over existing code and implement retroactive optimizations modularly. A shortcoming in Squid, implemented as a macro library, is that free variables must be marked explicitly. Furthermore, contexts are represented as contravariant structural types\(^8\) which complicates the error messages.
10 Conclusion & Future Work
Metaprogramming has a reputation for being difficult and confusing. However, with explicit Expr/Type types, quotes and splices, it can become downright pleasant. A simple strategy first defines the underlying quoted or unquoted values using Expr and Type and then inserts quotes and splices to make the types line up. Phase consistency is at the same time a great guideline for where to insert a quote or a splice and a vital sanity check that the result makes sense.
As future work we plan to study the formal properties of our system. Furthermore, we plan to complement it with a version of inline that not only provides \(\beta\)-reductions at the expression-level but also at the type-level.
Acknowledgments
We thank the anonymous reviewers of the program committee for their constructive comments. We gratefully acknowledge funding by the Swiss National Science Foundation under grants 200021_166154 (Effects as Implicit Capabilities) and 407540_167213 (Programming Language Abstractions for Big Data). We thank Liu Fengyun, Olivier Blanvillain, Oleg Kiselyov, Nick Palladinos and the Dotty contributors for discussions we had.
---
\(^7\)Splice types into Quotations—https://web.archive.org/web/20180712194211/https://github.com/fsharp/fslang-suggestions/issues/584
\(^8\)type Code[+Typ, -Ctx]
References
Automated Repair of Binary and Assembly Programs for Cooperating Embedded Devices
Eric Schulte∗ Jonathan DiLorenzo† Westley Weimer† Stephanie Forrest∗
∗ Department of Computer Science
University of New Mexico
Albuquerque, NM 87131-0001
{eschulte,forrest}@cs.unm.edu
† Department of Computer Science
University of Virginia
Charlottesville, VA 22904-4740
{jd9hz,weimer}@cs.virginia.edu
Abstract
We present a method for automatically repairing arbitrary software defects in embedded systems, which have limited memory, disk and CPU capacities, but exist in great numbers. We extend evolutionary computation (EC) algorithms that search for valid repairs at the source code level to assembly and ELF format binaries, compensating for limited system resources with several algorithmic innovations. Our method does not require access to the source code or build toolchain of the software under repair, and it requires neither program instrumentation, specialized execution environments, virtual machines, nor prior knowledge of the bug type.
We repair defects in ARM and x86 assembly as well as ELF binaries, observing decreases of 86% in memory and 95% in disk requirements, with 62% decrease in repair time, compared to similar source-level techniques. These advances allow repairs previously possible only with C source code to be applied to any ARM or x86 assembly or ELF executable. Efficiency gains are achieved by introducing stochastic fault localization, with much lower overhead than comparable deterministic methods, and low-level program representations.
When distributed over multiple devices, our algorithm finds repairs faster than predicted by naïve parallelism. Four devices using our approach are five times more efficient than a single device. The algorithm is implemented on Nokia N900 smartphones, with inter-phone communication fitting in 900 bytes sent in 7 SMS text messages per device per repair on average.
Categories and Subject Descriptors D.2.3 [Software Engineering]: Coding Tools and Techniques; D.2.5 [Software Engineering]: Testing and Debugging; D.3.2 [Language Classifications]: Macro and Assembly Languages; I.2.8 [Artificial Intelligence]: Heuristic methods
General Terms Experimentation, Languages
Keywords Automated program repair, evolutionary computation, fault localization, assembly code, bytecode, legacy software
1. Introduction
Automated software repair is an emerging research area in which algorithmic and heuristic approaches are used to search for, generate, and evaluate candidate repairs for software defects. It has received attention in programming languages (e.g., [15]), operating systems (e.g., [28]) and software engineering (e.g., [33, 35]) venues. Automated repair methods have been applied to multiple classes of software engineering and security defects (e.g., [33]) including hard-to-fix concurrency bugs [15], have won human-competitive awards [9], and automatic repairs have been successfully pitted against DARPA Red Teams to demonstrate quality [26]. With bug repair dominating software development costs (90% of the total cost of a typical software project is incurred after delivery [23]) such automated techniques are of increasing importance.
However, few automated repair techniques apply to resource constrained embedded systems, instead targeting desktop client software such as Firefox [15, 26], server software such as MySQL [15] or web servers, or design-by-contract Eiffel programs [33]. Given the tight coupling between embedded software and the unique execution environments in which they operate, desktop testing and repair tools are often insufficient. Current research in this field is increasingly out of step with the needs of industry, in which embedded microprocessors account for more than 98% of all produced microprocessors [4]. One example of a wide-reaching embedded defect was the “Zune bug,” in which 30GB Microsoft Zune Media Players froze up on the last day of a leap year [2]. In addition, previous repair techniques that apply to binaries [26] or assembly language [24] have uniformly targeted Intel x86, despite “the widespread dominant use of the ARM processor in mobile and embedded systems” [8].
Evolutionary computation (EC) is a stochastic search method based on Darwinian evolution, which has been applied to the automated repair problem. Like other search methods, EC is relevant when it is easier to evaluate a candidate solution than to predict the form of a correct solution [11]. Although EC techniques do not yet synthesize large programs in traditional languages, they can repair a wide variety of real defects at the source-code level in real-world software applications [20].
We propose to repair assembly language and executable binary programs directly using EC across multiple architectures, including embedded and mobile systems. Resource constraints in embedded
1 In this paper we use the term genetic algorithm (GA) interchangeably with evolutionary computation (EC). GAs and Genetic Programming (GP) are two concrete realizations of the general EC framework. GAs are typically defined over linear strings and GP typically refers to tree-based executable representations of programs. Our method uses linear strings that are executable.
and mobile systems preclude the use of existing techniques. For example, it is not feasible to trace all operands that CPU instructions read or write (as in Sec. 2.1.1) or even to trace all instructions visited (as in [33]). A pre-set, unified locking discipline may not be available (as used by [15]). Source code with debugging information may not be available, so statements and abstract syntax trees cannot be accurately identified to reduce search-space size (as used by [33]). Furthermore, formal pre- and post-condition annotations are unlikely to be present (as used by [33]). Finally, most embedded devices do not ship with a complete compiler toolchain, and a deployed device may not have the storage or RAM to support its original mission together with a heavyweight repair framework that uses the GCC toolchain. The closest assembly-level repair work is Clearview [26].
The main contributions of this paper are as follows:
- An architecture-independent representation and stochastic fault localization algorithm that supports automatic program repair at the assembly and binary levels. We demonstrate applicability to x86 and ARM processors. Although sampling for fault localization is not new, its application and evaluation in the domain of program repair is, and our technique is an order-of-magnitude faster and of comparable accuracy to deterministic methods.
- An empirical demonstration that disk space and memory requirements are 95.22% and 85.71% smaller, respectively, than similar methods, allowing application to mobile and embedded systems.
- An empirical comparison across different levels of program representation with respect to EC’s success and efficiency at finding repairs.
- A demonstration that most source-statement-level repairs reported previously [35] can also be carried out at the assembly and binary level with a 62.43% average decrease in the time required to perform a repair between the AST and ELF levels.
- A distributed GA that performs automated program repair across multiple embedded devices. Four devices using our approach are five times faster than a single device for our repair benchmarks.
- Empirical demonstration of the repair method on Nokia N900 smartphones (600 MHz ARM Cortex-A8 CPU, 250 MB memory). Using two phones, the distributed algorithm conducts repairs with just under 900 bytes sent (or 7 SMS messages) per participant per repair.
2. Background
In automated software repair (e.g., [26, 33, 15]), defects are corrected by searching over and evaluating a solution space of possible repairs. The set of repairs (fixes) may be generated from random variations of the instructions in the program, as in GenProg [20]; from formal source code annotations, as in AutoFix-E [33]; or from a pre-defined library, such as the locking changes used by the AFix project [15] or the clamp-variable-to-value operation of Clearview [26].
Our work is based on the EC approach [9, 33, 36]. EC mimics Darwinian evolution in a computational algorithm that searches for candidate solutions to a given problem [14]. In the program repair context, a population of program variants is generated by applying random mutations to the original buggy source code. These variants are then compiled using the program’s build toolchain, and the resulting executables are evaluated against the program’s test suite. The test suite assesses the goodness, or fitness, of each variant, and the fitness value is used to select which variants will remain in the population, undergoing additional mutations. In addition to mutation, the algorithm uses crossover to exchange instructions between two variants, producing a recombination of partial solutions. The process iterates until a variant passes all test cases and also fixes the bug, or until an upper time limit is reached.
To reduce the size of the search space of possible repairs, mutation and crossover operators are limited to re-organizing statements present in the original program. No new statements are created, and sub-statement program elements, such as expressions, are not changed directly. Fault localization (e.g., [16]) focuses the mutation operators on statement nodes executed on the buggy input. To collect this information, an execution trace is recorded from an instrumented version of the program run against the test suite.
We extend this previous work to run directly on compiled assembly files and linked ELF executables, and we introduce a stochastic method of fault localization appropriate for these lower-level representations. These extensions remove the requirement for source code availability and the need for compilation and linking (for the ELF level) as part of the search process. It is applicable to arbitrary assembly and ELF programs rather than only C-language programs and enables repairs that are not expressible at the C statement level.
3. Technical Approach
Build processes generate intermediate representations, each of which is a possible target for automated repair, and each of which poses unique challenges for the repair method. These challenges include: the form of the representation and mutation operators, the granularity of fault localization information, and the tools required to express a representation as an executable program. The following subsection reviews the repair framework at a high level, which mirrors the AST algorithm [35, 36]. We summarize the technical and algorithmic aspects of ASM and ELF repairs, including the resources required by each level of representation, the effects of representation on mutation operations, and the requirements for expression as executable programs.
3.1 Evolutionary Repair Algorithm
The algorithm shown in Figure 1 applies to source-level ASTs, compiled ASM, and compiled and linked ELF repairs. Section 4.1 reports values for the parameters, such as popSize, used in our experiments. The next subsections present the stochastic fault localization algorithm, ASM and ELF representations, and mutation and crossover operators.
The overall structure in Figure 1 is iterative. Tournament selection selects variants for the next iteration (generation) (Lines 8, 12); retained variants exchange sub-strings to produce offspring (Line 9); all variants are mutated (Line 16), and the process repeats until a solution is found (Line 18).
3.2 Stochastic Fault Localization
In large programs, it is reasonable to assume that most parts of the program are not related to a given bug [16], and fault localization methods often target code executed on the bug-inducing input. Accurate fault localization is critical for targeting the repair operators [35, Fig. 5] and is an important factor in running time [36, Fig. 1].
Our earlier work recorded entire sequences, or paths [35], of executed statements using various weighting factors [16], which required expensive program instrumentation or runtime harnesses. A key challenge is obtaining accurate enough information for guiding automated repairs, while maintaining the efficiency required for embedded devices.
Input: Program $P$ to be repaired.
Input: Set of positive testcases $p \in PosT$.
Input: Set of negative testcases $n \in NegT$.
Output: Repaired program variant $V$.
1. $Path_{PosT} \leftarrow \bigcup_{p \in PosT} \text{locations visited by } P(p)$
2. $Path_{NegT} \leftarrow \bigcup_{n \in NegT} \text{locations visited by } P(n)$
3. $Path \leftarrow \text{set weights}(Path_{NegT}, Path_{PosT})$
4. $Pop \leftarrow \text{initial population}(P, pop\_size)$
5. repeat
6. $NewPop \leftarrow \emptyset$
7. for $i = 1 \rightarrow (pop\_size \times \text{cross\_percent})$ by 2 do
8. $V_1, V_2 \leftarrow \text{tournament}(Pop), \text{tournament}(Pop)$
9. $NewPop \leftarrow NewPop \cup \{\text{crossover}(V_1, V_2)\}$
10. end for
11. for $i = 1 \rightarrow (pop\_size \times (1 - \text{cross\_percent}))$ do
12. $NewPop \leftarrow NewPop \cup \{\text{tournament}(Pop)\}$
13. end for
14. $Pop \leftarrow \emptyset$
15. for all $(V, Path_V) \in NewPop$ do
16. $Pop \leftarrow Pop \cup \{\text{fitness(mutate}(V, Path_V))\}$
17. end for
18. until $\exists (V, Path_V, f_V) \in Pop \mid f_V = \text{max\_fitness}$
19. return $V$
Figure 1. High-level pseudocode for EC-based automatic program repair, which applies to all levels of representation. Representation-specific subroutines such as fitness($V$) and mutate($V, Path_V$) are described subsequently.
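To make the loop concrete, the following is a minimal, hypothetical Python sketch of the Figure 1 skeleton (not the authors' implementation): a population of instruction lists evolves through tournament selection, single-point crossover and swap mutation until a variant reaches maximal fitness or a generation budget is exhausted. The toy "defect", the parameter values, and all names are invented for illustration; the sketch's crossover yields a single offspring, whereas the paper's operator produces two.

```python
import random

# Minimal illustrative sketch of the EC repair loop in Figure 1.
# A "program" is a linear list of instructions; fitness counts
# passing test cases.

def tournament(pop, fitness, k=3):
    """Return the fittest of k randomly chosen variants."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """Single-point crossover producing one offspring."""
    i = random.randrange(1, min(len(a), len(b)))
    return a[:i] + b[i:]

def mutate(v):
    """Swap two instructions; no new instructions are introduced,
    mirroring the paper's restriction to re-organizing existing code."""
    v = list(v)
    i, j = random.randrange(len(v)), random.randrange(len(v))
    v[i], v[j] = v[j], v[i]
    return v

def repair(program, fitness, max_fitness, pop_size=20, cross=0.5, gens=200):
    pop = [mutate(program) for _ in range(pop_size)]
    for _ in range(gens):
        # Crossover fraction of the new population...
        new = [crossover(tournament(pop, fitness), tournament(pop, fitness))
               for _ in range(int(pop_size * cross))]
        # ...the rest are tournament-selected survivors.
        new += [tournament(pop, fitness) for _ in range(pop_size - len(new))]
        pop = [mutate(v) for v in new]
        best = max(pop, key=fitness)
        if fitness(best) == max_fitness:
            return best
    return max(pop, key=fitness)

# Toy defect: instructions are out of order; each "test" checks one slot.
target = [0, 1, 2, 3]
fitness = lambda v: sum(x == t for x, t in zip(v, target))
random.seed(1)
repaired = repair([3, 1, 2, 0], fitness, max_fitness=4)
```

Because the operators only re-arrange and copy existing instructions, every candidate stays within the vocabulary of the original program, which is what keeps the search space tractable.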
Many traditional code profilers (e.g., gcov) are language-specific and rely on the insertion of assembly instrumentation, consuming unacceptable storage and run-time resources. For example, instrumentation of the flex benchmark at the C-language level required storing sequential ordering information from about 443,399 raw statement visits, with program instrumentation increasing the CPU run-time by a factor of 100. Direct extensions such as deterministic sampling of the program counter (e.g., using ptrace) performed poorly in our preliminary work.
[Figure: the CPU's program counter is sampled, memory addresses are mapped back to instructions, and the panels show raw versus smoothed sample counts.]
Figure 2. Stochastic Fault Localization (raw and smoothed samples from the merge-cpp benchmark).
To address these constraints, we propose a sampling approach to fault localization (Figure 2), which is applicable to arbitrary assembly and ELF programs and dramatically reduces resource requirements compared to earlier work. We sample the program counter (PC) across multiple executions of the program. The sampled memory addresses are then mapped to bytes in the .text section of ELF files or to specific instructions in ASM files. The result is a count of the total number of times each instruction in the program was sampled. Stochastic sampling only approximates control flow and is vulnerable to gaps, elided periodic behavior, over-sampled instructions, etc. To overcome these limitations, we apply a 1-D Gaussian convolution to the sampled addresses with a radius of 3 assembly instructions, such that the smoothed value $G(x)$ of each instruction is a weighted sum of its own raw value $F(x)$ and those of its six nearest neighbors.
$$G(x) = \sum_{i=-3}^{3} F(x + i) \times \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}i^2}$$
This simple transformation increases the chance that instructions that are executed but not directly sampled will be counted, and it improves the correlation between stochastic and deterministic samples (Section 4.2).
Gaussian convolution is an accepted method of smoothing data to reduce detail and noise in fields such as computer vision. However, to our knowledge it has not previously been applied to fault localization. Section 4.2 compares the fault localization information produced by our stochastic sampling to a fully deterministic program trace.
Samples are collected from multiple runs of each of the program’s tests. Despite the multiple executions of each test, the CPU time required is less than that of comparable deterministic techniques (Section 4.4). The union of the instructions sampled during the positive tests and the union of the instructions sampled during the negative tests form the positive and negative fault localizations, respectively. These sets are then used to guide program repair.
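The smoothing step can be sketched as follows (an illustrative Python rendering, assuming the Gaussian weight depends on the neighbour offset i, with unit standard deviation, and that out-of-range neighbours contribute zero):

```python
import math

# Sketch of the 1-D Gaussian smoothing of raw program-counter sample
# counts described above (radius 3 instructions).

def smooth(raw):
    """Smoothed count G(x) = sum over i in [-3, 3] of
    F(x + i) * (1 / sqrt(2*pi)) * exp(-i*i / 2)."""
    n = len(raw)
    weights = {i: math.exp(-0.5 * i * i) / math.sqrt(2 * math.pi)
               for i in range(-3, 4)}
    return [sum(raw[x + i] * w
                for i, w in weights.items() if 0 <= x + i < n)
            for x in range(n)]

# A single sampled instruction bleeds weight onto its neighbours, so
# executed-but-unsampled instructions still receive some credit.
smoothed = smooth([0, 0, 0, 10, 0, 0, 0])
```

The single spike at index 3 yields a symmetric bump: neighbours at distance 1, 2 and 3 receive progressively smaller, non-zero weights, which is exactly the property that compensates for sampling gaps.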
3.3 ASM and ELF Program Representations
In contrast to the tree-structured (nested) source code representation, our assembly and ELF level representations use a linear sequence of instructions, e.g., as produced by gcc `-S` or objdump `-d -j .text` respectively. Candidate repairs are generated by swapping, inserting or deleting instructions.
3.3.1 Genetic Operators
ASM and ELF representations are modified using four operators designed for the linear representations: three mutation types and one crossover type (Figure 3). Instructions appearing only in the negative localization are ten times more likely to be mutated than instructions appearing in both the negative and positive localizations. Linked ELF executables often contain hard-coded memory addresses included as literals in the program’s .text section. Since there is no general way to distinguish an integer literal from an address literal, our mutation operators occasionally create invalid addresses (recent work may enable improved treatment of literal addresses in the future).
To minimize disturbance of literal addresses and of information outside the .text section, the ELF mutation and crossover operators attempt to maintain the byte length of the linear instruction array and to minimize changes to the in-memory addresses of existing instructions.
Delete: A single instruction selected by weight is removed. At the ELF level, every byte of the deleted instruction is replaced with a nop byte.
Insert: A single instruction selected at random is copied to a new location selected by weight. At the ELF level, changes to offsets are minimized by removing a number of nearby nop instructions equal to the size of the inserted code. In the rare case where there are insufficient nop instructions, the size of the .text section increases, which usually reduces individual fitness.
Swap: An instruction selected at random is swapped with an instruction selected by weight. This operation is naturally length-preserving and requires no special treatment at the ELF level, although swapping two instructions of different lengths may alter instruction offsets in the short region between those two locations.
Crossover: Given two parent programs (Figure 3, line 11), a single index less than the length of the shorter parent program is selected at random. Single-point crossover is performed, concatenating the instructions from one parent up to the selected index, and the instructions from the other parent after the selected index. This operation produces two new variant programs. At the ELF level, the index is selected such that the number of bytes before the index is the same in each parent.
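A hypothetical sketch of these operators over a Python list of instruction strings (all names and the weighting details are ours; the real system operates on assembled instructions and nop-pads at the ELF level):

```python
import random

# Illustrative sketch (not the authors' code) of the three mutation
# operators and crossover on a linear instruction array, with
# fault-localization weighting: indices only in the negative
# localization are ten times more likely to be picked.

def pick_weighted(indices, neg_only):
    """Choose an index, weighting negative-only locations 10x."""
    idx = list(indices)
    weights = [10.0 if i in neg_only else 1.0 for i in idx]
    return random.choices(idx, weights=weights, k=1)[0]

def delete_op(prog, neg_only):
    i = pick_weighted(range(len(prog)), neg_only)
    return prog[:i] + prog[i + 1:]   # ELF level would nop-pad instead

def insert_op(prog, neg_only):
    src = random.randrange(len(prog))            # copied at random
    dst = pick_weighted(range(len(prog)), neg_only)
    return prog[:dst] + [prog[src]] + prog[dst:]

def swap_op(prog, neg_only):
    i = random.randrange(len(prog))
    j = pick_weighted(range(len(prog)), neg_only)
    prog = list(prog)
    prog[i], prog[j] = prog[j], prog[i]
    return prog

def crossover_op(a, b):
    """Single-point crossover yielding two offspring."""
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

random.seed(0)
prog = ["mov", "add", "cmp", "jle", "ret"]
shorter = delete_op(prog, neg_only={3})
```

Note that delete and insert change the array length, which is why the ELF-level variants described above compensate with nop padding to preserve instruction offsets.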
3.3.2 Fitness Evaluation
Before fitness evaluation, the variant must be converted to an on-disk executable. Generating an executable from an ASM representation requires writing the array of ASM instructions (and any assembly directives or pseudo-operations) to disk, then assembling and linking it using commands taken from the program’s original build sequence. The ELF representation is written directly to a binary executable ELF file on disk without any elements of the software project’s build toolchain.
The executable is then run against the program’s positive and negative test cases. To protect against dangerous behavior by the random variants, a lightweight sandboxing solution was readily constructed from standard Linux utilities (e.g., ulimit and chroot). All experimental results presented in the paper include the cost of sandboxing. Repairs to kernel-level embedded code that manipulates hardware directly would require different methods. Fitness is assessed by running the executable on all test cases and computing a weighted average score based on the passing test cases, where negative test cases count twice as much as positive ones.
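A minimal sketch of this weighted score, assuming test outcomes are already available as booleans (the helper names are ours; a real harness would run the sandboxed executable to obtain them):

```python
# Weighted fitness as described above: negative test cases (those
# demonstrating the bug) count twice as much as positive ones.

def fitness(pos_results, neg_results):
    """pos_results/neg_results are lists of booleans, one per test."""
    return sum(pos_results) + 2 * sum(neg_results)

def max_fitness(n_pos, n_neg):
    """The score of a variant passing every test."""
    return n_pos + 2 * n_neg

# A variant passing 4 of 5 positive tests and the single negative test:
score = fitness([True, True, True, True, False], [True])
```

Doubling the negative-test weight biases selection toward variants that actually address the defect, rather than ones that merely preserve existing behavior.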
3.4 Motivating Example
Consider the program for exponentiation shown in Figure 4, which illustrates the expressive power of ASM- and ELF-level repairs.
It contains a bug in which the two arguments are assigned to the wrong variables.
3.5 Distributed Genetic Algorithm (DGA)
The methods described in Section 3.2 and Section 3.3 enable automated repair in devices with severely limited disk and memory resources. For example, the Nokia smartphones used in our experiments provide only 256 MB of memory. Also, such devices may not be fast enough to find and evaluate a successful repair in a reasonable amount of time on their own. To address these concerns, we present a distributed genetic algorithm (DGA) that allows multiple devices to collaborate in finding a repair.
As illustration, we consider a group of mutually trusting smartphones cooperating to repair the same bug. The repair may be found more quickly if the search burden can be distributed over many devices. This is a plausible use case given the large number of homogeneous installs of smartphone applications.
The DGA is considered successful, compared to naïve parallelism in which all devices work independently, if the total number of fitness evaluations required to find a repair is reduced, thus reducing time and power costs. A second design goal is to minimize network communication. In the smartphone scenario, we assume that communication occurs via infrequent SMS messages, rather than high-power, high-bandwidth links.
Distributed GAs are based on the insight that separate genetic populations sharing a small amount of data can often outperform a single population of the same total size. Each participating device maintains its own population of variants and periodically shares high-fitness variants with other devices. GAs are known to
be more effective when they operate over genetically diverse pop-
ulations [13], and there is a large literature on the problem of “pre-
mature convergence” in GAs (e.g., [10]). Because the search in
each sub-population is dominated by local high-fitness variants, di-
verse sub-populations on each device can explore different parts of
the search space in parallel. Performance is thus enhanced by max-
imizing diversity among the sub-populations stored on each device.
We hypothesize this to contribute to our DGA’s superlinear reduc-
tion in fitness evaluations over naive parallelism (see Section 4.5).
Two novel aspects of the DGA for software repair are: splitting
the fault-localization search space among devices, and diversity-
based migration.
3.6 Splitting the Search among Participants
To maximize sub-population diversity, we use fault localization information to constrain the search such that each device explores a different region of the search space. Recall that the repair algorithm modifies only those parts of the program identified by fault localization (Section 3.3) and that positive fault localization instructions are weighted differently from negative instructions with respect to choosing mutation locations. Let S be the ordered list of atomic elements of the program representation (e.g., assembly instructions or groups of bytes) identified by fault localization over the positive test cases. Given N devices, we assign each device responsibility for two contiguous sub-sequences of S of size \( k = \frac{|S|}{2N} \), although each device contains the entire program and a list of the statements visited by negative test cases only (Section 3.2). We hypothesize that contiguous statements are likely relevant to the same repair. Formally, if \( s_j \in S \) is the \( j^{th} \) element of \( S \) counting in representation order, then device \( i \) of \( N \) only modifies elements of \( S_i \), where:
\[
S_i = \left\{ s_j \in S \;\middle|\; \left\lfloor \frac{j}{k} \right\rfloor \equiv i \pmod{N} \right\}
\]
Note that since the insert and swap operators take one operand from the fault localization weighting and one at random, this division does not formally partition the search space, but it does divide the work of searching it into slightly overlapping parcels.
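One concrete reading of this division can be sketched as follows. This is a sketch, not the authors' code: we assume block size k = |S|/(2N), assigned round-robin, so that each of the N devices receives two contiguous sub-sequences; the helper name is hypothetical.

```python
def device_slice(S, i, N):
    """Elements of the fault-localized sequence S that device i (of N)
    may mutate: positions j whose block index floor(j/k) is congruent
    to i mod N, where k = |S| / (2N). Device i thus receives blocks
    i and i+N, i.e., two contiguous sub-sequences."""
    k = max(1, len(S) // (2 * N))
    return [s for j, s in enumerate(S) if (j // k) % N == i]
```

With S of length 8 and N = 2, device 0 receives the first and third quarters and device 1 the second and fourth; together the devices cover all of S.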
3.7 Diversity-based Migration
Each device periodically communicates a subset of its current population to a subset of the other devices. We hypothesize that search performance is improved if diverse variants are shared. We measure diversity between variants as the number of unique edits in their representations. An iterative calculation identifies the \( n \) most diverse variants in a subpopulation. Each device is assigned one neighbor (Figure 5, lines 6–7). The incoming variants are added to each node's subpopulation and are subject to selection in the next generation. The process then repeats. When a participant finds a repair, the repair is sent in an out-of-band message, and the process terminates early (not shown).
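The iterative diversity calculation can be sketched greedily, representing each variant by the set of edits in its history. The concrete procedure below is an assumption (the text does not fully specify it); distance is the number of unshared edits, as above.

```python
def diverse_select(pop, n):
    """Greedily pick n mutually diverse variants from a subpopulation.
    Each variant is a (frozen)set of edits; the distance between two
    variants is the number of edits not shared between them."""
    chosen = [max(pop, key=len)]  # seed with the largest edit history
    rest = [v for v in pop if v is not chosen[0]]
    while len(chosen) < n and rest:
        # next pick: the variant whose closest already-chosen
        # variant is farthest away (maximin diversity)
        best = max(rest, key=lambda v: min(len(v ^ c) for c in chosen))
        chosen.append(best)
        rest.remove(best)
    return chosen
```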
The number of variants exchanged is at most \( N \times d \times \text{gen} \). Since \( d \) is chosen to be small and the number of generations is small, communication cost is effectively linear in the number of participants. Because all variants share a common ancestor in the original program, only the edit history needs to be communicated (cf. [20, Sec. III-B]). For example, a variant created by deleting instruction 3 and then swapping instructions 1 and 2 can be serialized in a form such as “d(3)s(1,2)”. In practice, we encode the operation (delete, insert, or swap) in one byte and operation-specific operands in one or two 16-bit integers. Fitness is included as a final byte. This encoding assumes self-contained compact descriptions of edits, and thus does not admit crossover.
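The byte encoding described above might look as follows; this is a sketch, and the exact struct layout (endianness, opcode values) is an assumption.

```python
import struct

OPCODES = {'d': 0, 'i': 1, 's': 2}  # delete, insert, swap

def encode(history, fit):
    """Serialize an edit history such as [('d', 3), ('s', 1, 2)]:
    one opcode byte per edit, one 16-bit operand for deletes, two for
    inserts and swaps, and the variant's fitness as a final byte."""
    buf = b''
    for op, *operands in history:
        buf += struct.pack('<B', OPCODES[op])
        buf += struct.pack('<%dH' % len(operands), *operands)
    return buf + struct.pack('<B', fit)
```

A single-delete variant thus serializes to 4 bytes and a single-swap variant to 6 bytes, matching the bandwidth accounting in Section 4.5.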
4. Experimental Results
This section presents empirical results evaluating the ASM and ELF representations and the DGA. The results show the following:
1. Stochastic fault localization closely approximates the deterministic approach (Section 4.2).
2. Repair success at the ASM and ELF representation levels is similar to that reported previously for ASTs (Section 4.3).
3. The ASM and ELF representations, together with stochastic fault localization, have small resource footprints, suitable for running on embedded devices (Section 4.4).
4. The DGA increases success rates while reducing the total fitness evaluations required to find a repair (Section 4.5).
4.1 Experimental Setup
Table 1 lists the benchmark defective programs evaluated in this paper. For ease of comparison, they are taken from earlier work [15] with two additions: merge sort was added to evaluate the stochastic fault localization algorithm on a test suite with full assembly statement coverage, and merge-cpp was added to demonstrate a language other than C. Each program comes with a regression test suite, used to validate candidate repairs, and at least one test case indicating a defect. These programs have on average \( 3.69 \times \) more assembly instructions and \( 9.55 \times \) more bytes in the .text section of ELF files than lines of source code.
Algorithm: Distributed Repair
Input: Program \( P \) to repair.
Input: Set of positive testcases \( p \in PosT \).
Input: Set of negative testcases \( n \in NegT \).
Input: Number \( d \) of variants per migration.
Input: Number \( N \) of networked participants.
1: \( Subpop \leftarrow \text{initial\_population}(P, \text{popsize}) \)
2: \( \textbf{for} \ generation = 1 \rightarrow \text{gen} \ \textbf{do} \)
3: \( Id \leftarrow \) temporary device-specific network identifier
4: \( Subpop \leftarrow \text{run}(Subpop, PosT, NegT) \) (Fig. 3, lines 6–17)
5: \( Migrants \leftarrow \text{div\_select}(Subpop, d) \) (Sec. 3.7)
6: \( \text{send}(\text{succ}(Id), Migrants) \)
7: \( Migrants \leftarrow \text{receive}(\text{pred}(Id)) \)
8: \( Subpop \leftarrow Subpop \cup Migrants \)
9: end for
Figure 5. Distributed genetic algorithm (DGA) for program repair. The
search is distributed among participants that share information (diverse
high-fitness program variants) after each generation.
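A per-participant sketch of this loop follows. The helper functions stand in for the figure's primitives (`run` for the GA inner loop, `div_select` for the diversity selection of Section 3.7, `send`/`receive` for the ring neighbors) and are hypothetical.

```python
def dga_participant(run_generation, div_select, send, receive, subpop, gens):
    """One node's view of the distributed GA of Figure 5: evolve the
    local subpopulation, send diverse high-fitness variants to the ring
    successor, and merge migrants received from the predecessor.
    Selection prunes the merged pool in the next generation."""
    for _ in range(gens):
        subpop = run_generation(subpop)   # local GA generation
        send(div_select(subpop))          # diversity-based migration
        subpop = subpop + receive()       # union with incoming migrants
    return subpop
```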
Table 1. Benchmark programs used in experiments, taken from Weimer et al. [35] with the additions of merge and merge-cpp. Program size is reported as follows: lines of code (LOC) in the original C source, LOC in the assembly files (x86, as produced by gcc -S), and size (in bytes) of the .text sections of the x86 ELF files. Each program has a regression test suite and a failing test case indicating a fault.
<table>
<thead>
<tr>
<th>Program</th>
<th>C LOC</th>
<th>ASM LOC</th>
<th>ELF Bytes</th>
<th>Program Description</th>
<th>Defect</th>
</tr>
</thead>
<tbody>
<tr>
<td>atris</td>
<td>9578</td>
<td>39153</td>
<td>131756</td>
<td>graphical tetris game</td>
<td>local stack buffer exploit</td>
</tr>
<tr>
<td>ccrypt</td>
<td>4249</td>
<td>15261</td>
<td>18716</td>
<td>encryption utility</td>
<td>segfault</td>
</tr>
<tr>
<td>deroff</td>
<td>1467</td>
<td>6330</td>
<td>17692</td>
<td>document processing</td>
<td>segfault</td>
</tr>
<tr>
<td>flex</td>
<td>8779</td>
<td>37119</td>
<td>73452</td>
<td>lexical analyzer generator</td>
<td>segfault</td>
</tr>
<tr>
<td>indent</td>
<td>5952</td>
<td>15462</td>
<td>49384</td>
<td>source code processing</td>
<td>infinite loop</td>
</tr>
<tr>
<td>look-s</td>
<td>205</td>
<td>516</td>
<td>1628</td>
<td>dictionary lookup</td>
<td>infinite loop</td>
</tr>
<tr>
<td>look-u</td>
<td>205</td>
<td>541</td>
<td>1784</td>
<td>dictionary lookup</td>
<td>infinite loop</td>
</tr>
<tr>
<td>merge</td>
<td>72</td>
<td>219</td>
<td>1384</td>
<td>merge sort</td>
<td>improper sorting of duplicate inputs</td>
</tr>
<tr>
<td>merge-cpp</td>
<td>71</td>
<td>421</td>
<td>1540</td>
<td>merge sort (in C++)</td>
<td>improper sorting of duplicate inputs</td>
</tr>
<tr>
<td>s3</td>
<td>594</td>
<td>767</td>
<td>1804</td>
<td>sendmail utility</td>
<td>buffer overflow</td>
</tr>
<tr>
<td>uniq</td>
<td>143</td>
<td>421</td>
<td>1288</td>
<td>duplicate text processing</td>
<td>segfault</td>
</tr>
<tr>
<td>units</td>
<td>496</td>
<td>1364</td>
<td>3196</td>
<td>metric conversion</td>
<td>segfault</td>
</tr>
<tr>
<td>zune</td>
<td>51</td>
<td>108</td>
<td>664</td>
<td>embedded media player</td>
<td>infinite loop</td>
</tr>
<tr>
<td>total</td>
<td>31862</td>
<td>117682</td>
<td>304288</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
We used the following GA parameters: population size $\text{popsize} = 1000$; maximum number of fitness evaluations per trial $\text{evals} = 5000$; mutation rate $\text{mut} = 1.0$ per individual per generation; and crossover rate $\text{crossover} = 0.5$ crossovers per individual per generation.
Most experiments were run on a machine with 2.2 GHz AMD Opteron processors and 120 GB of memory. The wall-clock evaluation of the DGA was conducted on a single server-class machine, with 3.0 GHz Intel Xeon CPUs and 15.6 GB of memory. Cell phone experiments were conducted on Nokia N900 smartphones, each of which features a 600 MHz ARM Cortex-A8 CPU and 256 MB of mobile DDR memory.
4.2 Fault Localization Evaluation
We collected program counter samples using oprofile [21], a system-wide profiler for Linux that does not alter the profiled program, runs on embedded devices, and, when appropriately configured, has minimal impact on system-wide performance.
We process the samples as described in Section 3.2, explicitly comparing to deterministic approaches (below) and evaluating utility in the context of program repair in Section 4.3. The direct comparison used merge sort, which is small and exhaustively tested with 100% statement and branch coverage, as well as deroff, which is larger and has less-complete test coverage. The stochastic and deterministic traces taken from the failing test cases of both programs are shown in Figure 6. Inputs for the failing test case exercise the bugs described in Table 1.
Ten stochastic samples and one deterministic sample were collected for both programs. We find high correlations of 0.96 (merge) and 0.65 (deroff) among the stochastic samples, indicating consistency across samples. We find lower correlations of 0.61 (merge) and 0.38 (deroff) between the naive stochastic (no Gaussian convolution) and deterministic samples, which increase to 0.71 (merge) and 0.48 (deroff) after Gaussian convolution, indicating that smoothing provides significant improvement.
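The Gaussian smoothing step can be sketched as a one-dimensional convolution over per-address sample counts, so that sampled addresses lend weight to unsampled neighbors. The kernel width parameters are assumptions; the paper does not give concrete values here.

```python
import math

def smooth(counts, sigma=2.0, radius=4):
    """Convolve raw per-address sample counts with a normalized
    Gaussian kernel of the given radius and standard deviation."""
    kernel = [math.exp(-(d * d) / (2 * sigma * sigma))
              for d in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    out = [0.0] * len(counts)
    for i, c in enumerate(counts):
        if c:
            for d, k in zip(range(-radius, radius + 1), kernel):
                j = i + d
                if 0 <= j < len(counts):
                    out[j] += c * k
    return out
```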
4.3 ASM and ELF Repair Success
Table 2 compares ASM and ELF to AST, averaging over 100 trials of the standard (non-DGA) algorithm for each tested configuration. Memory usage, which varies insignificantly across runs, was calculated from a single run.
Our Nokia smartphones have 256+768 MB of RAM plus swap. In that environment, only 8 of 13 programs can be repaired at the AST level (for the remaining programs, memory is exhausted or the repair fails), compared to 10 at the ASM level and 11 at the ELF level.
We expected the repair process to be much less efficient, especially at the ELF level, both in terms of success rate and time to first repair. Generally, EC searches are more challenging in larger search spaces: there are more locations to choose for a mutation and more possible instructions to choose from when performing an insertion. However, differences among the average percentage of successful repairs across representations are not large, with values of 65.83%, 70.75% and 78.17% for ELF, ASM and AST respectively. Using Fisher's Exact test to compare success rates, we find no significant difference between AST and either ASM or ELF, with p-values of 1 (between AST and ASM) and 0.294 (between AST and ELF). This suggests that automated repair at the ASM and ELF levels is a practical alternative to source-level repair.
Figure 6. Fault localization in program address space. Results for stochastic sampling, shown in black, identify program regions similar to those found deterministically, shown in gray.
Some bugs are more amenable to repair at particular levels of representation. For example, atris and units are repaired most easily at the AST level, indent at the ASM level, and merge sort at the ASM and ELF levels.
The atris repair involves deleting a call to getenv. At the AST level this requires a single deletion, while at the ASM level three contiguous instructions must be deleted, and the repair was not found there. The repair was found at the ELF level, and all five repairs found were unique, each involving from 3 to 7 accumulated mutation operations.
The merge repair involves replacing an if statement with its else branch. At the AST level this requires swapping exactly those two statements, which is 1 of 4900 possible swap mutations. At the ASM and ELF levels, modification of a single comparison instruction suffices to repair the program. This is one of only 218 such changes possible and is much more easily found by our technique.
The “Expected Fitness Evaluations” column reports the expected number of fitness evaluations per repair:
\[
\text{expected} = \overline{\text{fit}}_s + \left( \overline{\text{run}}_s - 1 \right) \times \overline{\text{fit}}_f \quad (1)
\]
where \( \overline{\text{fit}}_s \) is the average number of fitness evaluations in a successful run, \( \overline{\text{fit}}_f \) is the average number of fitness evaluations in a failed run, and \( \overline{\text{run}}_s \) is the average number of runs per successful repair (the reciprocal of the success rate).
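For intuition, the calculation behind Equation 1 can be carried out directly; this is an illustrative sketch, and the sample numbers below are hypothetical rather than taken from the paper's tables.

```python
def expected_evaluations(success_rate, evals_per_success, evals_per_failure):
    """Expected fitness evaluations per repair (Equation 1): one
    eventual successful run plus, on average, (1/success_rate - 1)
    failed runs, each costing the average failed-run evaluations."""
    runs_per_success = 1.0 / success_rate
    return evals_per_success + (runs_per_success - 1.0) * evals_per_failure
```

For example, a 50% success rate with 100 evaluations per successful run and 5000 per failed run yields an expectation of 5100 evaluations per repair.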
Given that repair time is dominated by fitness evaluation, which includes compilation and linking at the AST level, and linking at the ASM level, and that for all programs but units (an outlier in this regard) the expected number of evaluations is roughly equivalent between levels of representation, we conclude that, when repairs are possible, the repair process is usually more efficient at the ASM and ELF levels than at the AST level.
4.4 Resource Requirements
We desire a repair algorithm that can run within the resource constraints of mobile and embedded devices. We consider three key constraints: CPU usage and runtime, memory requirements, and disk space requirements. Section 4.5 also evaluates network communication for the DGA.
CPU Usage. Runtime costs associated with GA bookkeeping (e.g., sorting variants by fitness, choosing random numbers, etc.) are typically dwarfed by the cost of evaluating fitness. For example, on an average run of deroff, bookkeeping accounted for only 13.5% of the runtime. The primary costs are computing fault localization information and fitness evaluation (including compilation and linking, depending upon the representation used).
Stochastic fault localization requires from 50 to 500 runs of the original unmodified program. Importantly, the absolute running time determines the number of required executions, so slow programs require fewer executions and only quickly terminating programs require more than 50 executions. By contrast, AST-level repairs require compilation of an instrumented program with a 100× slowdown per run (Section 3.2). Related executable-level approaches introduce a 300× slowdown to compute fault localization information [26, Sec. 4.4], and full deterministic tracing with ptrace incurs a 1200× slowdown. Our fault localization approach is an order of magnitude faster than these previous approaches.
After fault localization is complete, both the ASM and ELF representations have lower fitness evaluation costs than the AST level because compilation and linking are not required. However, the search problem for ASM and ELF is potentially much larger than for AST (compare the size columns in Table 1), suggesting they may need more fitness evaluations to find a repair. If the time to conduct a repair at the AST level is normalized to 1.0, on average ASM repairs take 7.22× as long and ELF repairs take 0.38× as long. Using a Mann-Whitney U-test, the runtime improvement of ELF over AST is marginally significant (p = 0.055). While ASM-level repair is slower, this can be mitigated through collaboration across multiple devices (Section 4.5).
Memory. Memory utilization is important for mobile and embedded devices. The earlier work was conducted on server machines with 8 GB [26] to 16 GB of RAM [35]. By contrast, the Nokia N900 smartphones we consider as indicative use cases have 256 MB Mobile DDR—an order of magnitude less.
Table 2. Resource comparison of abstract syntax tree (AST), assembly source (ASM), and ELF binary (ELF) repair representations. “Memory” reports the average maximum memory required for a repair (as reported by the Unix top utility). “Runtime” reports the average time per successful repair in seconds. “% Success” gives the percentage of random seeds for which a valid repair is found within 5000 runs of the full test suite. “Expected Fitness Evaluations” counts the expected number of evaluations per repair (Equation 1). † indicates that there were no successful repairs in 5000 runs of the full test suite. ‡ indicates memory requirements in excess of the 1024 MB of RAM plus swap available on the Nokia N900 smartphones.
Table 2 reports the memory used (in MB) for repairs at the AST, ASM and ELF representations, showing that ASM requires only about 53.91% of the memory of a source-based representation, while ELF is significantly smaller, requiring only 14.29% of the memory. We attribute the low requirements for ELF to the ELF parser we used, which stores only the .text section of ELF binaries in memory.
**Disk space.** Beyond the subject program and its test suite, disk usage is composed of two main elements: the repair tool and the build suite of the program to be repaired. The size of these elements varies greatly with representation level of the repair. For example, repairs at the ELF level do not require the build tool-chain of the original program, enabling repair of embedded programs that are cross-compiled and cannot be built locally. We next discuss the disk space requirements at all three levels.
AST requires the source code and build tool chain of the original program. Our baseline comparison, GenProg, takes 23 MB on disk (including the tool itself, the gcc compiler and header files, the gas assembler, and the ld linker).
ASM requires only the assembly code, assembler, and linker. This is a significantly lighter build requirement. Our ASM implementation is currently incorporated into the AST repair framework [15] to ensure a controlled environment for comparison. It requires 12 MB on disk (including the tool itself, the gas assembler, and the ld linker).
ELF requires only a compiled executable. Like ASM, our prototype is a modification of the AST-level repair framework, replacing the source-code parser with an ELF parser. It requires only 1.10 MB on disk, an order of magnitude decrease compared to AST.
As one concrete example of the resource limitations of embedded devices, the Nokia N900 smartphone ships with 256 MB of NAND flash storage (holding the Maemo Linux kernel and bootloader, etc., with about 100 MB free), and a 32 GB eMMC store holding a 2GB ext3 partition, 768 MB of swap, and about 27 GB of free space in a vfat partition. The vfat partition is unmounted and exported whenever a USB cable is attached to the device, making it unsuitable for a deployed system repair tool. Linux packages install to the NAND flash by default, quickly exhausting space. Repartitioning is possible but uncommon for casual users. Thus, even though the device claims 32 GB of storage, significantly less is available for a stable repair tool. Although these are merely implementation details, we argue that such conditions and the need to minimize the on-disk footprint are indicative of many embedded devices.
### 4.5 Distributed and Embedded Repair Results
Table 3 summarizes the performance of the DGA with the number of nodes ranging from one to four. The “% Success” column lists the fraction of trials for which a successful repair is found, normalized so that a single non-networked participant has 1.0. Overall success rate improves by 13% from one to four participants because they share diverse variants and collaborate on the search by exploring different portions of the program space.
In most instances, the time to find the first repair is critical. The “Expected Fitness Evals” column measures that effort in a machine-independent manner (Section 4.3). In practice, fitness evaluations (which require repeatedly running the program test suite) account for the majority of algorithmic runtime. The number of fitness evaluations required to find a repair drops by a factor of 5 and the average standard deviation by 62%. Each fitness evaluation includes the time to run the test suite of the subject program.
Table 3. DGA performance for one to four participating nodes. “% Success” and “Expected Fitness Evals” are normalized so that a single non-networked participant has the value 1.0.
<table>
<thead>
<tr>
<th rowspan="2">Program</th>
<th colspan="4">% Success (# nodes)</th>
<th colspan="4">Expected Fitness Evals (# nodes)</th>
</tr>
<tr>
<th>1</th><th>2</th><th>3</th><th>4</th>
<th>1</th><th>2</th><th>3</th><th>4</th>
</tr>
</thead>
<tbody>
<tr><td>atris</td><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1</td><td>0.53</td><td>0.40</td><td>0.27</td></tr>
<tr><td>ccrypt</td><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1</td><td>0.62</td><td>0.27</td><td>0.24</td></tr>
<tr><td>deroff</td><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1</td><td>0.58</td><td>0.44</td><td>0.32</td></tr>
<tr><td>flex</td><td>1</td><td>1.13</td><td>1.87</td><td>2.07</td><td>1</td><td>0.87</td><td>0.46</td><td>0.40</td></tr>
<tr><td>indent</td><td>1</td><td>1.04</td><td>1.04</td><td>1.04</td><td>1</td><td>0.25</td><td>0.16</td><td>0.10</td></tr>
<tr><td>look-s</td><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1</td><td>0.50</td><td>0.55</td><td>0.29</td></tr>
<tr><td>look-u</td><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1</td><td>0.76</td><td>0.37</td><td>0.32</td></tr>
<tr><td>merge</td><td>1</td><td>1.69</td><td>2.14</td><td>2.31</td><td>1</td><td>0.43</td><td>0.22</td><td>0.18</td></tr>
<tr><td>s3</td><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1</td><td>0.47</td><td>0.31</td><td>0.24</td></tr>
<tr><td>uniq</td><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1</td><td>0.49</td><td>0.46</td><td>0.31</td></tr>
<tr><td>units</td><td>1</td><td>1.27</td><td>1.33</td><td>1.33</td><td>1</td><td>0.32</td><td>0.11</td><td>0.07</td></tr>
<tr><td>zune</td><td>1</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1</td><td>0.47</td><td>0.36</td><td>0.27</td></tr>
</tbody>
</table>
As a baseline, we also measured the performance of naïve parallelism (i.e., using two Nokia cell phones to run two separate copies of the repair algorithm and stopping when the first repair is found) to demonstrate that the distributed algorithm is responsible for the performance gains. For a meaningful comparison, we focused on benchmarks that take more than one generation to repair (i.e., flex, indent, look-s and look-u) and thus reached the distributed portion of the DGA algorithm described in Figure 5. Over 100 trials, two nodes running DGA use only 442 total fitness evaluations per repair compared to 848 for naïve parallelism (not counting evaluations performed after the first repair is found). On two nodes, DGA thus requires 48% fewer fitness evaluations than naïve parallelism.
Finally, we calculate the network bandwidth consumed by DGA. Recall that at the end of each generation, each machine sends \( d = \text{popsize}/20 \approx 50 \) diverse variants to a neighbor. After the first generation, each variant has one edit in its history (\( \text{mut} = 1.0 \)). One third of all mutations are deletions, which can be represented by four bytes (the opcode, the operand, and the fitness). Insertions and swaps require six bytes (the opcode, two operands, and the fitness). The expected data size sent from one participant to another after the first generation is thus \( 50 \times (4 \times \frac{1}{3} + 6 \times \frac{2}{3}) \approx 267 \) bytes. After the second generation, each of the \( d \) variants sent will typically have two edit operations in its history, following the same distribution (and thus requiring twice as much bandwidth to communicate). Since each of the \( N \) networked participants sends one batch of variants after every generation, the expected cumulative total network bandwidth used, as a function of the number of generations \( G \) before the repair is found, is estimated as:
\[
\text{bytes}\_\text{sent}(G) \approx \sum_{i=1}^{G} N \times d \times i \times (4 \times \frac{1}{3} + 6 \times \frac{2}{3})
\]
With our default parameters, this estimate implies 801 bytes sent per participant (spread over two network sends or six SMS messages per participant) by the end of the second generation. Empirically, we find that our measured results match this model to within 10%. For example, over eight random trials to repair merge with...
two participants averaging 2.4 generations, DGA sent a total of 7000 bytes, requiring 51 SMS text messages. This implies that just under 875 bytes (or 6.4 SMS text messages) were required per node per trial. We claim that this low communication cost is well within what would be considered reasonable for the task of program repair across embedded or mobile systems.
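The bandwidth model above can be checked numerically. This is a sketch of the model only; with the default \( d = 50 \), the per-participant total through two generations comes to 800 bytes, matching the 801-byte figure in the text up to per-generation rounding.

```python
def bytes_sent(generations, n_nodes, d=50):
    """Cumulative bytes sent by all participants: in generation i each
    node sends d variants averaging i edits each; a third of edits are
    4-byte deletes, two thirds are 6-byte inserts or swaps."""
    per_edit = 4 * (1 / 3) + 6 * (2 / 3)  # about 5.33 bytes per edit
    return sum(n_nodes * d * i * per_edit for i in range(1, generations + 1))
```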
To measure the impact of communication on run time, we next evaluated wall-clock times for the DGA, the original serial algorithm, and an ideal naïve parallel adaptation of the original algorithm with no inter-node communication, all at the AST source code level. These runs were performed on a single server-class machine with four 3.0 GHz Intel Xeon CPUs and 15.6 GB of memory, using TCP over Ethernet. The DGA incurs a cost from inter-node communication, but the results in Table 4 show that this cost is more than offset by the increased algorithmic efficiency of search-space splitting and migration of high-fitness variants among the nodes.
Table 4 shows the mean wall clock time in seconds to find a repair for the DGA and a naïve parallel version of the original serial algorithm. We report results for one to four participating repair nodes (in practice all nodes were run in parallel on a single multi-core). The naïve algorithm running on a single node is exactly the original serial algorithm.
<table>
<thead>
<tr>
<th># Nodes</th>
<th>DGA Seconds</th>
<th>Rounds</th>
<th>Naïve Parallel Seconds</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>173.868</td>
<td>43.2</td>
<td>205.531</td>
</tr>
<tr>
<td>2</td>
<td>135.17</td>
<td>28.2</td>
<td>195.821</td>
</tr>
<tr>
<td>3</td>
<td>115.566</td>
<td>14.5</td>
<td>201.346</td>
</tr>
</tbody>
</table>
At each level of parallelization, the distributed algorithm finds repairs faster than the ideal naïve parallelization of the original algorithm. Judging the significance of these differences using the Kolmogorov-Smirnov test (used instead of a t-test because these distributions are not normal, given the large differences between runs that do or do not find a repair) yields a value of 0.5 with \( p = 0.000214 \). The wall-clock performance gained by using the distributed algorithm over a naïve parallel algorithm is statistically significant.
5. Related Work
**Genetic Algorithms and Evolution of Machine Code.** Previous work can be divided into two broad categories. One designs mutation operators to preserve the validity of the programs they manipulate, both over CISC instruction sets [23] running on hardware and more recently over Java bytecode [13,25] running on the Java virtual machine. An alternative is to use generic mutation operators, relying on safety mechanisms built into the CPU and operating system to catch and terminate invalid individuals [19].
Our work follows the second approach, using general operators that can be applied across both CISC x86 and RISC Arm architectures. Given the low cost of linking ASM individuals and writing ELF individuals to disk, we favor using the CPU to check validity, finding that it is adequate.
“Evolvability” analysis [24] has led some to declare x86 assembly code an unfit medium for evolutionary computation [31]. Based on this belief, translations between x86 and more evolvable intermediate languages have been proposed, both to construct evolvable malware [7] and to distribute binary patches [1]. Our results provide a counterexample, showing that EC at the ASM and ELF level can efficiently generate viable program variants. By operating on whole instructions, the negative effects of “argumented instructions” [24] are minimized. Similarly, through the use of nop padding in ELF-level mutation, changes in absolute offsets are minimized, thus reducing the impact of direct addressing.
**Assembly Program Repair.** Schulte et al. provide the closest instance of related work, proposing automated program repair for assembly programs [27]. That preliminary short paper describes EC-generated repairs to x86 assembly programs, focusing only on x86 assembly as a lowest common denominator for high-level languages (such as C and Haskell). Here, we extend that work to target both ASM and ELF representations and multiple assembly languages (x86 and ARM), propose stochastic fault localization, demonstrate that our technique scales to embedded devices, and propose and demonstrate a novel distributed repair algorithm.
**Executable Program Repair.** The closest instance of related work that automatically repairs binary executables is the ClearView system [26], which patches errors in deployed Windows x86 binaries. ClearView uses a split-phase learning-and-monitoring approach: program instrumentation is used to learn invariants, and later monitoring notices violations in deployed programs. If a violation occurs, ClearView considers possible patches, evaluating them against an indicative workload of test cases. ClearView focuses on specific error types and prespecifies a set of repair templates (e.g., breaking out of a loop, clamping a variable to a value, etc.). It is therefore less general than our method, although it has obtained impressive results in its domain, patching nine out of twelve historical Firefox vulnerabilities, even in the face of a DARPA Red Team. However, the ClearView approach seems heavyweight; the authors report experiments involving rack-mount server machines with 16 GB of RAM, VMware virtualization, and a 300× slowdown during the learning phase [26, Sec. 4.4]. By contrast, our work targets resource-constrained embedded environments and uses stochastic sampling to reduce the cost of fault localization, which does not require program instrumentation.
A distributed repair technique has also been described for ClearView to protect application communities, but that work did not report performance results [26]. Their distributed algorithm amortizes the cost of learning invariants (i.e., of computing fault localization information) and ignores the time to find a candidate repair. By contrast, we demonstrate a distributed algorithm that finds repairs more quickly.
**Distributed Evolutionary Algorithms.** The large literature on distributed and parallel genetic algorithms dates back to Grosso \[11\], who explored the idea of subdividing a GA population into smaller subpopulations with occasional exchanges of fit individuals among the populations. More recent work ranges from implementations tailored to particular hardware configurations \[12,22\] to a wide variety of algorithms in which the population is partitioned and individuals are shared among the partitions according to different schemes, e.g., \[12\]. Parameter tuning (population size, migration rate, etc.) is a concern, with recommendations available for several problem types \[6\]. Perhaps most relevant to the current work is a Doctoral Colloquium describing a distributed genetic programming implementation on a wireless sensor network \[32\], although that work is still quite preliminary.
6. Discussion
Compared to automated program repair over C statements, an assembly representation operates over a finite alphabet of elements. A typical assembly instruction consists of an opcode and two or three
operands, while C statements may be of arbitrary size and complexity (e.g., $x = 1 + 1 + \ldots$). In addition, there are typically at least three times more assembly instructions than C statements. Together, these facts mean that an assembly representation limited to permutations of the original program's elements samples the space of possible programs at a much higher rate. In a system where the linking of an assembly file does not introduce new instructions or arguments, the alphabet and expressive power of the ASM and ELF representations are equivalent.
We see the effects of this increased coverage in Section 3.4 in which the program is only repaired at the ASM and ELF levels, and in Section 4.3 in which some repairs are much more easily expressed at the ASM and ELF levels.
While introducing new bugs is a possibility, we typically find very small changes that remove only the buggy behavior. Specifically, we reviewed the repairs presented here and found no evidence of introduced bugs.
While moving from the tree AST representation to the vector ASM and ELF representations may seem minor, the EC community has two separate sub-fields, GP and GA, dedicated to the study of tree- and vector-based representations respectively, each with their own research challenges (e.g., bloat in GP), application domains, best practices, journals and conferences.
There were several technical challenges in the implementation, particularly regarding the manipulation of ELF files. Existing tools such as the GNU ELF tool suite (`libelf`) and its BSD equivalent (`libfd`) do not support changes to the contents of existing ELF files. We thus developed our own libraries for manipulating ELF files, including support for the automated updates to ELF file metadata in response to an altered `.text` section.
Although ELF files do support symbolic addressing through symbol names and run-time linkers, direct addresses pose a significant problem for the randomly changing raw binary code sections. This is mitigated by mutation operators that minimize disruption to the location of compiled code.
It is sometimes useful to consider the total number of unique repairs produced. For example, additional candidate repairs help developers create high-quality final patches [34]. Because our DGA explicitly manages diversity across sub-populations, we hypothesize that it might produce a wider variety of distinct repairs. We measured uniqueness in terms of changes made to the code: two repairs are distinct if their edit operations, treated as unordered sets, are not identical. With four participants, our DGA found 20% more unique repairs than with one participant. If we consider only the challenging `flex`, `indent`, `merge` and `units` repairs, the improvement in discovered unique repairs rises to 73%.
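The uniqueness criterion can be sketched directly: treating each repair as an unordered set of edit operations, two repairs are distinct exactly when those sets differ. The `EditOp` encoding below is a hypothetical illustration, not the tool's actual data structure.

```c++
#include <set>
#include <string>
#include <vector>

// Hypothetical edit-operation encoding, e.g. "swap(12,47)" or "delete(3)".
using EditOp = std::string;

// Two repairs are distinct iff their edit operations, compared as
// unordered sets, are not identical (order of application is ignored).
bool distinctRepairs(const std::vector<EditOp>& a, const std::vector<EditOp>& b) {
    return std::set<EditOp>(a.begin(), a.end()) != std::set<EditOp>(b.begin(), b.end());
}

// Count unique repairs among all those reported by the sub-populations.
size_t countUnique(const std::vector<std::vector<EditOp>>& repairs) {
    std::set<std::set<EditOp>> unique;
    for (const auto& r : repairs)
        unique.insert(std::set<EditOp>(r.begin(), r.end()));
    return unique.size();
}
```

Under this criterion, the same operations applied in a different order count as one repair.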
Some may argue that it is aesthetically unappealing to modify code at all, particularly with a stochastic algorithm such as EC. We believe that distributed automated repair methods will be necessary in the future, as embedded devices become ubiquitous and are deployed in a wide range of environments. As the computational power of distributed embedded devices eclipses that of centralized servers, timely centralized testing and repair becomes infeasible (cf. recent surveys already report 28–29 day lag times for centralized repairs [39]).
There are several areas of potential future work. For example, we expect that variations of this technique could be used to optimize performance of programs for specific environments. A second area is proactive diversity to disrupt software monocultures. In security settings, randomization is often inserted into compiled programs to prevent malicious attacks. Assembly code evolution could be used to add diversity to deployed software.
6.1 Limitations and Caveats
The fine granularity of repairs at the ASM and ELF levels may be a poor match for conventional test suites. For example, we have observed ASM-level repairs that change the calling convention of one particular function. Such a repair has no direct representation at the C source level, and a test suite designed to maximize statement coverage (for example) may not speak to the validity of such a repair. Producing efficient test suites that give confidence that an implementation adheres to its specification remains an open problem in software engineering. Our work shares this general weakness with all other approaches that use test suites or workloads to validate candidate repairs (e.g., Clearview [26] and GenProg [35]). In this regard, sandboxing is crucial: we have observed ASM variants that subvert the testing framework by deleting key test files, leading to perfect fitness for all subsequent variants until the test framework is repaired.
Benchmark selection is a threat to the external validity of our experiments. We used benchmarks taken from published papers to admit direct comparison; to mitigate this threat we augmented the benchmark set with very high test coverage and non C-language examples.
Our DGA uses a self-contained compact encoding of each variant, to facilitate communication among nodes via SMS. This encoding precluded the use of crossover because it would not be meaningful to exchange information between two variants under the encoding, even though crossover is an important feature of many GAs. This restriction implies that our results might not generalize to other GAs. Since crossover can improve search time and success rates, it is possible that our results would be improved with a DGA implementation that supports crossover. This could be tested using a recently published patch representation [20] that supports a concise encoding of crossover.
7. Summary and Conclusion
This paper extends previous work on automated program repair at the AST level to compiled (ASM) and linked (ELF) programs. The new representations allow repairs when source code cannot be parsed into ASTs (e.g., due to unavailable source files, complex build procedures or non-C source languages). They also reduce memory and disk requirements sufficiently to enable repairs on resource-constrained devices. We also introduce a stochastic fault localization technique, which is applicable to these representations and devices, and present a distributed repair algorithm that allows costly repair processes to be split across multiple devices.
Importantly for embedded devices, our techniques reduce memory requirements by up to 85%, disk space requirements by up to 95% (Section 4.4), and repair generation time by up to 62% (Section 4.4), enabling application to resource-constrained environments. We demonstrate our technique on Nokia N900 smartphones, whose resource constraints serve as a practical proxy for future low-power computing systems.
Our fault localization algorithm is based on stochastic sampling and Gaussian convolution. It provides the instruction- and byte-level precision required by the ASM and ELF representations, while retaining sufficient accuracy to guide automated repair. In addition, it is ten times faster than previous approaches and more suited to devices where direct instrumentation is infeasible.
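A minimal sketch of the smoothing step, assuming per-instruction hit counts have already been gathered by stochastically sampling the program counter during failing runs; the kernel width and normalization below are illustrative choices, not the exact parameters used in this work.

```c++
#include <cmath>
#include <vector>

// Gaussian-smoothed fault localization sketch: spread each sampled hit
// count over neighboring instructions so nearby code also becomes a
// plausible mutation target. `sigma` (in instructions) is illustrative.
std::vector<double> smoothFaultWeights(const std::vector<double>& hits, double sigma) {
    const int radius = static_cast<int>(3.0 * sigma);
    const int n = static_cast<int>(hits.size());
    std::vector<double> weights(n, 0.0);
    for (int i = 0; i < n; ++i) {
        double sum = 0.0, norm = 0.0;
        for (int k = -radius; k <= radius; ++k) {
            const int j = i + k;
            if (j < 0 || j >= n) continue;
            const double g = std::exp(-(k * k) / (2.0 * sigma * sigma));
            sum += g * hits[j];
            norm += g;
        }
        weights[i] = sum / norm;  // higher weight => more likely mutation target
    }
    return weights;
}
```

No instrumentation is required: only sampled program-counter values feed the hit counts.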
We take advantage of these reduced resource requirements in a distributed repair algorithm in which multiple cell phones communicate via SMS messages to find repairs more quickly. Sections 3.8 and 3.5 detail the algorithm. Using four devices we increase success rates by 13% and reduce fitness evaluation burdens by a factor of five—a superlinear improvement over naive parallelism.
The distributed algorithm’s use of multiple populations could also be used to speed up serial repair on a single device. Communication costs are low: two phones require under 900 bytes (or 7 SMS messages) per participant per repair on our benchmarks.
Taken together, these techniques constitute the first general automated method of program repair applicable to binary executables, and are a first step in the application of automated software repair to the growing field of mobile and embedded devices.
7.1 Acknowledgments
C. Le Goues, N. Hollschulte, R. Bodik, and the anonymous reviewers provided helpful comments on the manuscript. The work was supported by DARPA (P-1070-113237), DOE (DE-AC02-05CH11231), NSF (SHF-0905236, SHF-0905373, CCF-0954024) and the Santa Fe Institute.
References
Automatic Generation of Schedulings for Improving the Test Coverage of Systems-on-a-Chip
Claude Helmstetter, Florence Maraninchi, Laurent Maillet-Contoz, Matthieu Moy
HAL Id: hal-00311006
https://hal.archives-ouvertes.fr/hal-00311006
Submitted on 12 Aug 2008
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Automatic Generation of Schedulings for Improving the Test Coverage of Systems-on-a-Chip
C. Helmstetter∗†, F. Maraninchi∗, L. Maillet-Contoz† and M. Moy∗
∗Verimag, Centre équation - 2, avenue de Vignate,
38610 GIÈRES — France
†STMicroelectronics, HPC, System Platform Group.
850 rue Jean Monnet, 38920 CROLLES — France
Abstract—SystemC is becoming a de-facto standard for the early simulation of Systems-on-a-chip (SoCs). It is a parallel language with a scheduler. Testing a SoC written in SystemC implies that we execute it, for some well chosen data. We are bound to use a particular deterministic implementation of the scheduler, whose specification is non-deterministic. Consequently, we may fail to discover bugs that would have appeared using another valid implementation of the scheduler. Current methods for testing SoCs concentrate on the generation of the inputs, and do not address this problem at all. We assume that the selection of relevant data is already done, and we generate several schedulings allowed by the scheduler specification. We use dynamic partial-order reduction techniques to avoid the generation of two schedulings that have the same effect on the system’s behavior. Exploring alternative schedulings during testing is a way of guaranteeing that the SoC description, and in particular the embedded software, is scheduler-independent, hence more robust. The technique extends to the exploration of other non-fully specified aspects of SoC descriptions, like timing.
I. INTRODUCTION
The Register Transfer Level (RTL) used to be the entry point of the design flow of hardware systems, but the simulation environments for such models do not scale up well. Developing and debugging embedded software on these low-level models before getting the physical chip from the factory is no longer possible at a reasonable cost. New abstraction levels, such as the Transaction Level Model (TLM) [1], have emerged. The TLM approach is component-based: hardware blocks are modules communicating through so-called transactions. The TLM models are used for early development of the embedded software, because the high level of abstraction allows a fast simulation. This new abstraction level comes with new synchronization mechanisms which often make existing methods for RTL validation inapplicable. In particular, recent TLM models no longer have a clock.
SystemC is a C++ library used for the description of SoCs at different levels of abstraction, from cycle accurate to purely functional models. It comes with a simulation environment, and is becoming a de facto standard. As TLM models appear first in the design flow, they become reference models for SoCs. In particular, the software that is validated with the TLM model should remain unchanged in the final SoC. Here, we concentrate on testing methods for SoCs written in SystemC.
The current industrial methodology for testing SoCs in SystemC is the following. First, we identify what we want to test (the System Under Test, or SUT), which is usually an open system. We make it closed by plugging input generators and a result checker, called oracle. SCV [2] is a testing tool for SystemC. It helps in writing input generators by providing C++ macros for expressing constraints:
```c++
SCV_CONSTRAINT((addr()>10 && addr()<50) ||
               (addr()>=2 && addr()<=5))
```
is an SCV constraint that will generate random values of `addr`. In most existing approaches, the SUT writes in memory, and the oracle consists in comparing the final state of the SUT memory to a reference memory. As usual, the main difficulty is to get a good quality test suite, i.e., a test suite that does not omit useful tests (that may reveal a bug) and at the same time avoids redundant tests (that can only expose the same bugs) as much as possible. Specman [3] is a commercial alternative to SCV which uses the e language for describing the constraints.
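For illustration only, the effect of such a constraint can be imitated in plain C++ with rejection sampling; SCV itself resolves constraints with a solver rather than by rejection, and the sampling range 0–63 below is an arbitrary assumption.

```c++
#include <random>

// Illustrative rejection-sampling stand-in for the SCV constraint above:
// keep drawing until (addr > 10 && addr < 50) || (addr >= 2 && addr <= 5).
int randomAddr(std::mt19937& rng) {
    std::uniform_int_distribution<int> dist(0, 63);  // assumed address range
    for (;;) {
        int addr = dist(rng);
        if ((addr > 10 && addr < 50) || (addr >= 2 && addr <= 5))
            return addr;
    }
}
```

Every value returned satisfies the disjunction of the two ranges.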
Contributions and Structure of the paper: We assume that the choice of relevant data for the testing phase has already been done: we consider a SoC written in SystemC, including the data generator and the oracle. For each of the test data, the system has to be run, necessarily with a particular implementation of the scheduler. Since the specification of the scheduler is non-deterministic, this means that the execution of tests may hide bugs that would have appeared with another valid implementation of the scheduler. Moreover, the scheduling is due to the simulation engine only, and is unlikely to represent anything concrete on the final SoC where we have true parallelism. We would like the SoC description, and in particular the embedded software, to be scheduler-independent. Exploring alternative schedulings is a way of validating this property.
We present an automatic technique for the exploration of schedulings in the case of SystemC. It is an adaptation and application of the method for dynamic partial order reduction presented in [4]. This method allows efficient exploration of the states of a system made of parallel processes (given as object code) that execute on a preemptive OS and synchronize with a lock mechanism. We show here that it can be applied to SystemC too. Adaptations are needed because: the SystemC scheduler is not preemptive; SystemC programs use non-persistent event notifications instead of locks; evaluation phases alternate with update phases; an eligible process cannot be disabled by another one.
Our tool is based on forking executions: we start executing the system for a given data-input, and as soon as we suspect that several scheduler choices could cause distinct behaviors, we fork the execution. We use an approximate criterion to decide whether to fork executions. The idea is to look at the actions performed by the processes, in order to guess whether a change in their order (as what would be produced by distinct scheduler choices) could affect the final state. This criterion is approximate in the following sense: we may distinguish between executions that in fact lead to the same final state; but we cannot consider as equivalent two executions that lead to distinct final states. The result is a complete, but not always minimal, exploration of the scheduling choices for the whole data-input.
The paper is structured as follows: section II presents an overview of SystemC. Section III is the formal setting; Section IV explains the algorithms and section V proves the properties of the method. We present our implementation and evaluate it in section VI, related work in section VII, and we conclude with section VIII.
II. SystemC and the Scheduling Problems
A TLM model written in SystemC is based on an architecture, i.e. a set of components and connections between them. Components behave in parallel. Each component has typed connection ports, and its behavior is given by a set of communicating processes that can be programmed in full C++. For managing the set of concurrent processes that appear in the components, SystemC provides a scheduler, and several synchronization mechanisms: the low-level events, the synchronous signals that trigger an event when their value changes, and higher level, user-defined mechanisms based on abstract communication channels.

The static architecture is built by executing the so-called elaboration phase (ELAB), which creates components and connections. Then the scheduler starts running the processes of the components, according to the informal automaton of figure 2. Simulations of a SystemC model look like sequences of evaluation phases (EV), separated by signal update phases (UP) and time elapses (TE) (see figure 1).
A. The SystemC Scheduler
According to the SystemC Language Reference Manual [5], the scheduler must behave as follows. At the end of the elaboration phase ELAB, some processes are eligible, some others are waiting. During the evaluation phase EV, eligible processes are run in an unspecified order, non-preemptively, and explicitly suspend themselves when reaching a wait instruction. There are two kinds of wait instructions: a process may wait for some time to elapse, or for an event to occur.
While running, it may access shared variables and signals, enable other processes by notifying events, or program delayed notifications. An eligible process cannot become “waiting” without being executed. When there is no more eligible process, signal values are updated (UP) and δ-delayed notifications are triggered, which can wake up processes. A δ-cycle is the duration between two update phases. Since there is no interaction between processes during the update phase, the order of the updates has no consequence. When there is still no eligible process at the end of an update phase, the scheduler lets time elapse (TE), and awakes the processes that have the earliest deadline. A notification of a SystemC event can be immediate, δ-delayed or time-delayed. Processes can thus become eligible at any of the three steps EV, UP or TE.
B. Examples
```c++
void top::A() {
  wait(e);
  wait(20, SC_NS);
  if (x) cout << "Ok\n";
  else cout << "Ko\n";
}
void top::B() {
  e.notify();
  wait(20, SC_NS);
  x = 1;
}
```
Fig. 3. The foo example
To illustrate possible consequences of scheduling choices, let us introduce two small examples of SystemC programs. Figure 3 shows the example `foo` made of two processes A and B. It has three possible executions according to the chosen scheduling, leading to very different results:
- A:B:A:[TE]:B:A: This scheduling leads to the printing of the string “Ok”.
- A:B:A:[TE]:A:B: The string “Ko” is printed. It is a typical case of data-race: x is tested before it has been set to 1.
- B:A:[TE]:B: The execution ends after three steps only. The only notification of the event e occurred before process A executed its “wait(e)” statement. Since events are not persistent in SystemC, process A has not been woken up. It is a particular form of deadlock.
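These three executions can be replayed with a toy scheduler written in plain C++ (no SystemC). The process bodies are modeled on the behavior described above; `T` in a scheduling string stands for a time-elapse step, and an immediate notification wakes A only if A is already waiting, mirroring the non-persistent event semantics.

```c++
#include <string>

// Toy model of the foo example: A waits for event e, then for time, then
// tests x; B notifies e, waits for time, then sets x = 1. Illustrative
// re-implementation of the semantics described in the text, not SystemC.
struct Foo {
    int x = 0;
    int pcA = 0, pcB = 0;                 // program counters
    bool aWaitsEvent = false;             // A blocked on event e
    bool aWaitsTime = false, bWaitsTime = false;
    std::string out;                      // what the run prints

    void stepA() {
        if (pcA == 0) { aWaitsEvent = true; pcA = 1; }       // wait(e)
        else if (pcA == 1) { aWaitsTime = true; pcA = 2; }   // wait(20, SC_NS)
        else if (pcA == 2) { out = x ? "Ok" : "Ko"; pcA = 3; }
    }
    void stepB() {
        if (pcB == 0) {                   // e.notify(); wait(20, SC_NS)
            if (aWaitsEvent) aWaitsEvent = false;  // caught; otherwise missed
            bWaitsTime = true; pcB = 1;
        } else if (pcB == 1) { x = 1; pcB = 2; }
    }
    void run(const std::string& sched) {  // e.g. "ABATBA"
        for (char c : sched) {
            if (c == 'A' && !aWaitsEvent && !aWaitsTime) stepA();
            else if (c == 'B' && !bWaitsTime) stepB();
            else if (c == 'T') aWaitsTime = bWaitsTime = false;  // TE
        }
    }
};
```

Running the three schedulings yields "Ok", "Ko", and an empty output (A stuck), matching the list above.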

```c++
void top::A() {
  // as in example foo
}
void top::B() {
  // as in example foo
}
void top::C() {
  sc_time T(20, SC_NS);
  wait(T);
}
```
Fig. 4. The foo bar example
It is useful to test all executions of the foo example because they lead to different final states. But consider now the foo bar example defined in figure 4. foo bar has 30 possible executions, but only 3 different final states. 12 executions are equivalent to “C;A;B;A;TE;C;B;A”, 12 to “C;A;B;A;TE;C;A;B” and 6 to “C;B;A;TE;C;B”. The method we present generates only 3 executions, one for each final state (or equivalence class).
In general testing techniques, the idea of generating one representative in each class of an equivalence relation is called partition-based testing [6]. It is not always formally defined.
C. Communication Actions
We call communication actions all actions that affect or use a shared object. We consider only two kinds of shared objects: events and variables. All other synchronization structures can be modeled using these two primitives.
There are two operations on events: wait and notify; and two operations on variables: read and write. In the sequel we will distinguish caught notifications (those that have woken up a process) from missed notifications, and writes that have modified the current value from non-modifying ones. Of course, these distinctions can only be done dynamically in the general case.
III. FORMAL SETTING
We will now explain how we generate schedulings for multi-threaded models written in SystemC. In the whole section, the SUT is a SystemC program. We suppose that we have an independent tool for generating test cases that only contain the data. We call SUTD the object made of the SUT plus one particular test data¹. We have to generate a relevant set of schedulings for this data.
Most of the definitions in this section are quite standard in the literature on partial order reduction techniques.
A. Representation of the SUTD
When data is fixed, a SUT execution is entirely defined by its scheduling; a scheduling is entirely defined by an element of \( (P \cup \{\delta, \chi\})^\ast \), where \( P \) is the set of process identifiers and \( \delta, \chi \) are special symbols used to mark the \( \delta \)-cycle changes and time elapses respectively. We consider full states of a SUTD to be full dumps of the SUTD memory, including the position in the code of each process. The SUTD can be seen as a function from the schedulings to the full states. It is partial: not all the elements of \( (P \cup \{\delta, \chi\})^\ast \) represent possible schedulings of the SUTD (because of the synchronization constraints between processes).
¹ Strictly speaking, the SUT includes a data generator, not a single piece of data. But the generator does not depend on the scheduling, hence the distinction is not necessary here.
Definition 1 (Schedulings): Let \( M \) be a SUTD. \( P_M \) is the set of its processes; \( S_M \) is the set of its reachable full states; \( F_M : (P_M \cup \{\delta, \chi\})^\ast \rightarrow S_M \) is its associated function. \( F_M \) is partial. A scheduling is an element of \((P_M \cup \{\delta, \chi\})^\ast\); a valid scheduling is an element of the definition domain of \( F_M : D_{F_M} \subset (P_M \cup \{\delta, \chi\})^\ast \).
For the programs of Section II-B, we have: \( D_{F_{\text{foo}}} = \{ABA\chi BA, ABA\chi AB, BA\chi B\} \) and \( F_{\text{foo\_bar}}(ABC) = F_{\text{foo\_bar}}(ACB) = F_{\text{foo\_bar}}(CAB) \).
Definition 2 (Transitions): A transition is one execution of one process in a particular scheduling. Each transition of a scheduling is identified by its process identifier indexed by the occurrence number of this process identifier in the scheduling. For example, in the scheduling \( pqp \) there are 3 transitions: \( p_1 \), \( q_1 \) and \( p_2 \), in that order.
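This indexing is purely mechanical; a small sketch:

```c++
#include <map>
#include <string>
#include <vector>

// Label each transition of a scheduling with its occurrence index, as in
// Definition 2: the scheduling "pqp" yields p1, q1, p2 in that order.
std::vector<std::string> labelTransitions(const std::string& scheduling) {
    std::map<char, int> count;
    std::vector<std::string> labels;
    for (char proc : scheduling)
        labels.push_back(std::string(1, proc) + std::to_string(++count[proc]));
    return labels;
}
```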
Definition 3 (Permutations): Let \( u = v p_i w q_j w' \) be a valid scheduling where the transition \( p_i \) (resp. \( q_j \)) corresponds to the \( i \)-th (resp. \( j \)-th) execution of process \( p \) (resp. \( q \)). Permuting the transitions \( p_i \) and \( q_j \) means generating a new valid scheduling \( u' \) such that \( u' \) begins with \( v \) and the \( j \)-th transition of \( q \) in \( u' \) is before the \( i \)-th transition of \( p \): there exist \( x, y, z \) such that \( u' = v x q_j y p_i z \). \( u' \) is called a permutation of \( p_i \) and \( q_j \).
We will use letters \( p, q, r \) to denote processes, \( a, b, c, \ldots \) to denote transitions and \( u, v, \ldots \) to denote sub-sequences of schedulings. Indexes will be omitted when obvious by context. An equivalence on the set of schedulings is needed to determine whether two schedulings lead to the same final state. We first define the relation \( \sim \):
\[
\forall uabv \in D_{F_M}, uabv \sim ubav \Leftrightarrow (ubav \in D_{F_M} \land F_M(ubav) = F_M(uabv))
\]
Definition 4 (Equivalence of Schedulings): The equivalence of schedulings is the reflexive and transitive closure of the relation \( \sim \). It is noted \( \equiv \).
This definition complies with the property: \( \forall u, v \in D_{F_M}, u \equiv v \Rightarrow F_M(u) = F_M(v) \). Therefore, if we generate one element of each equivalence class of \( \equiv \), we will have all possible final states. This allows the detection of any property violation, as soon as the corresponding output checker has been included in the SUT and drives it to a special final state when it detects an error.
B. Transition Dependency and Permutation Choice
We produce alternative schedulings by permuting some transitions of a given scheduling, but only when this can lead to a non-equivalent scheduling. For example, suppose that we are executing a SUTD and we have just executed the process \( p \) and then the process \( q \) (\( u = u_1 p q \)). If there is no causal reason why the transition of \( q \) was after the transition of \( p \) (process \( q \) was not waiting for an event notified in \( p \)), then we can permute these two transitions. In that case, executing \( q \) instead of \( p \) in the state \( F_M(u_1) \) can lead to a divergent path, as illustrated on figure 5. The question we have to answer is: “Do these two schedulings lead to the same state?” or formally: “\( F_M(u_1pq) = F_M(u_1qp) \)?”. Note that we may not be able to prove that \( F_M(u_1pq) = F_M(u_1qp) \) because we want to answer this question without executing \( u_1qp \) entirely. Hence we rely on the common objects accessed by the transitions to
guess whether a permutation has some effect on the final state. This is incomplete. If we cannot prove that the final states are equal, we generate the new scheduling.

Fig. 6. The dynamic dependency graph (DDG) representing the scheduling \( ABA\chi BA \) of the foo program of figure 3. Each horizontal line is a process. New cycles (\( \delta \) or \( \chi \)) are represented by vertical lines. Each box is a process transition. Dashed arrows (resp. plain lines) between boxes indicate that the two transitions are dependent but not permutable (resp. non-commutative). We may move some transitions on the horizontal axis, remaining among the valid and equivalent schedulings, provided we do not permute two boxes linked by an arrow or line.
**Definition 5 (Permutability):** The transitions \( a \) and \( b \) are causally permutable in the valid scheduling \( u_1 a u_2 b u_3 \), noted \( (a,b) \in P \), if and only if: \( \{ vba \in D_{F_M} \mid \exists w,\ vabw \equiv u_1 a u_2 b u_3 \} \neq \emptyset \). In other words, two transitions are permutable if:
1) there is an equivalent scheduling in which they are consecutive;
2) the second transition \( b \) can be elected in place of the first transition \( a \) in this equivalent scheduling.
**Definition 6 (Commutativity of Transitions):** The non-causally ordered transitions \( a \) and \( b \) are commutative in the valid scheduling \( u_1 a u_2 b u_3 \) if and only if: \( \forall vabw \equiv u_1 a u_2 b u_3,\ vbaw \equiv vabw \).
**Commutativity** is not defined for causally ordered transitions.
The theory of partial order reduction relies on the definition of dependent transitions [7]. In our work, we define the dependency relationship \( D \) as follows:
**Definition 7 (Dependency of Transitions):** The transitions \( a \text{ and } b \) are dependent if and only if they are not permutable, or permutable but not commutative.
The **causal order** specifies which transitions can be permuted in a particular scheduling without permuting dependent transitions, including themselves. All schedulings of the same equivalence class have the same causal order. Unlike the permutability relationship, the causal order is a partial order.
**Definition 8 (Causal Order):** The transitions \( a \) and \( b \) are causally ordered in the valid scheduling \( u = u_1 a u_2 b u_3 \), noted \( a \prec_u b \), if and only if \( (a,b) \) belongs to the transitive closure of \( \{ (x,y) \in D \mid x <_u y \} \), where \( x <_u y \) means that \( x \) occurs before \( y \) in \( u \).
**IV. Algorithms**
**A. Computation of the Commutativity Relationship**
The first step is to detect pairs of transitions which are not commutative. We compute here a relationship \( C \) for all pairs of transitions. This computed relationship is correct for permutable transitions, which is sufficient for our problem. Two transitions may be non-commutative \( ((a,b) \notin C) \) only if they contain non-commutative communication actions on the same shared object (see section II-C). Note that the order of these actions within a transition is irrelevant. We examine all cases below.
For shared variables there are three cases of non-commutative actions (since operations on variables have no effect on process eligibility, we just need to check whether the equality of resulting states is still verified after permutation):
1) a read followed by a modifying write
2) a modifying write followed by a read
3) a write followed by a modifying write
In all other cases, the transitions are commutative, as in example 2. Note that the nature of a write depends on the scheduling we consider. A modifying write can become a non-modifying write for another scheduling, and reciprocally.
**Example 1:** Variable \( x \) initially set to 0. The first transition executes the action \( x = x + 2 \). The second executes \( x = 4 - x \). It is a modifying write followed by a read so we consider that the two transitions are not commutative (point 2 above).
**Example 2:** Variable \( x \) initially set to 2. The first transition executes the action \( x = 4 \). The second also executes this instruction. It is a modifying write followed by a non-modifying write, so the two transitions are commutative.
Note that \( C \) is symmetric, which may not be obvious from point 3 above. But permuting a modifying write with a non-modifying write is still a modifying write followed by a non-modifying write, except if there is another pair of dependent actions. Example 2 also illustrates this remark.
For events, there are three cases of non-commutative actions:
1) a notification followed by a wait
2) a wait followed by a notification
3) a caught notification followed by a notification
The dependency between a wait and a notify is quite obvious: if the wait comes first, then the corresponding process is woken up by the notify, otherwise it remains sleeping. Example 3 illustrates the third case.
**Example 3:** Suppose one runs this three-process model:
- Initial state: process A waiting for e, B and C eligible.
- Process A: `cout << 'a'; x = 1;`
- Process B: `cout << 'b'; x = 2; e.notify();`
- Process C: `cout << 'c'; e.notify();`
There is exactly one transition per process, noted \( a \), \( b \) and \( c \). Four schedulings are valid: \( bac \), \( bca \), \( cab \) and \( cba \). In \( bac \) and \( bca \), \( b \) is dependent with \( a \) (two modifying writes) but they are causally ordered (process A was enabled by the transition \( b \)). However, if we permute \( b \) and \( c \), then \( b \) is no longer causally ordered with \( a \), since A was enabled by \( c \) instead of \( b \).
Permuting two notifications of an event does not modify the resulting state of the SUTD, but modifies the computed causal order. That’s why they are considered as non-commutative.
B. Computation of the Causal Partial Order
In order to compute the permutability, we need to compute the causal order \( \prec \). We denote by \( \text{prec}(u) \) the set \( \{ (a, b) \mid a, b \in u,\ a \prec_u b \} \) obtained after the execution of the scheduling \( u \).
We compute the causal order step by step. Obviously, for the empty scheduling we have \( \text{prec}(\epsilon) = \emptyset \). Let \( a \) and \( b \) be two transitions; we have \( a \prec b \), and so \( (a, b) \in D \), at least in the three following cases:
• \( a \) or \( b \) indicates a new \( \delta \)-cycle or a time elapse;
• \( a \) and \( b \) belong to the same process (by definition);
• the process of transition \( b \) has been woken up by \( a \).
In these cases, we note \( a \prec_{\delta} b \). The computation below is adapted from [4]. Having \( \text{prec}(u) \), we compute \( \text{prec}(ub) \) as follows:
\[
\text{prec}_1(ub) = \text{prec}(u) \cup \{ (a, b) \mid a \in u,\ a \prec_{\delta} b \}
\]
\[
\text{prec}_2(ub) = \text{prec}_1(ub) \cup \{ (a, b) \mid a \in u,\ (a, b) \notin C \}
\]
\[
\text{prec}(ub) = \text{transitive closure of } \text{prec}_2(ub)
\]
Finally, we have \( (a, b) \notin P \) in \( u_1 a u_2 b u_3 \) if and only if \( (a, b) \) belongs to the transitive closure of \( \text{prec}_1(u_1 a u_2 b) \).
The following property is useful to optimize the implementation. Let \( u_1 a u_2 b u_3 \) be a scheduling. Then \( \text{process}(a) = \text{process}(b) \wedge b \prec c \Rightarrow a \prec c \). Owing to this property, we can represent the causal order with an array \( T \) of size \( p \times s \), where \( s \) is the number of steps and \( p \) is the number of processes. The element \( T[a, q] \) is the number of the last transition of process \( q \) which is causally before \( a \); i.e.: \( a \prec b \Leftrightarrow \text{num}(a) \leq T[b, \text{process}(a)] \).
Some other optimizations are well explained in [4].
C. Generation of One Alternative Scheduling
We are now able to determine if two transitions are not commutative (hence should be permuted). Now we explain how we treat such a pair of transitions. Let \( uavb \) be a scheduling such that \( (a, b) \in D \cap P \). Let \( v = v_1 \ldots v_n \) where \( v_1, \ldots, v_n \) are transitions. The goal is to generate a new valid scheduling with \( b \) before \( a \). We proceed as follows:
1) The first part \( u \) is unmodified.
2) We execute all \( v_i \) such that \( a \nprec v_i \).
3) We execute \( b \) and then \( a \) (unlike some other concurrent languages, \( b \) cannot disable \( a \) in SystemC).
4) Then, since two dependent transitions have been permuted, we do not know whether the non-executed transitions \( v_i \) such that \( a \prec v_i \) are still defined. We are then free to choose the rest of the scheduling.
D. Generation of a Full Scheduling Suite
We start by executing the SUTD with a random scheduling. In parallel with the SUTD execution, we run a checker:
• the checker computes the causal partial order \( \prec \) and builds the Dynamic Dependency Graph.
• if it discovers two non-commutative transitions \( p_i \) and \( q_j \) with \( p_i \) before \( q_j \):
- it generates a new scheduling with \( q_j \) before \( p_i \), by permuting the transitions with the algorithm described above; the constraint “\( q_j \) before \( p_i \)” is saved with the new scheduling to prevent further permutations of the same transitions;
- it continues the current execution, adding the opposite constraint “\( p_i \) before \( q_j \)” to all of its further children.
Then we replay the SUTD with each generated scheduling \( u \). When we reach the end of \( u \), we continue the SUTD execution with a random scheduling. In parallel, we compute the causal order and generate new schedulings for each non-commutative pair of transitions, as for the previous schedulings. Thanks to the constraints saved with the generated schedulings, each new generated scheduling is more constrained than its father scheduling and so there are fewer and fewer new schedulings at each iteration. When the checker does not generate any new scheduling, we have a complete test suite.

Fig. 7. First iteration of the analysis for the e00 example. The first execution activates processes \( A \) and \( B \) in the order \( ABAAB \). The checker generates two new schedulings. One to permute \( A_1 \) and \( B_1 \) (unordered accesses to shared variable \( x \)).
V. PROPERTIES
The algorithm guarantees that we generate at least one element of each equivalence class (for the equivalence of definition 4).
**Theorem 1:** Let \( G_M \) be the set of all generated schedulings of a model \( M \). For any scheduling \( u \in D_{FM} \), there exists a scheduling \( v \in G_M \) such that \( u \equiv v \).
There are two useful and direct corollaries. First, if a local process state is present in a scheduling of \( D_{FM} \), it is also present in a scheduling of \( G_M \). Furthermore, we generate all the final states, including all deadlocks.
To prove the property, we need the definition of \( \equiv \) -prefix and \( \equiv \) -dominant for schedulings, directly adapted from prefix and dominant properties of Mazurkiewicz traces [7].
**Definition 9:** Let \( p, d \in D_{FM} \) be two schedulings, \( p \) is an \( \equiv \) -prefix of \( d \) and \( d \) an \( \equiv \) -dominant of \( p \) if and only if there exists a scheduling \( u \in D_{FM} \) such that \( u \equiv d \) and \( p \) is a string-prefix of \( u \).
Proof: We proceed by contradiction, and assume that there exists a scheduling \( u \in D_{F_M} \) which breaks the property. We can write \( u \) in the form \( u = u_1 a u_2 \) where \( u_1 \) is the longest prefix of \( u \) such that:
\[
\exists u_2',\ u_1 u_2' \in D_{F_M} \text{ and } \exists v \in G \text{ such that } u_1 u_2' \equiv v
\]
This decomposition is unique so we just have to prove that \( u_1 a \) has an \( \equiv \)-dominant in \( G \) to get the wanted contradiction.
Let \( v \in G \) be a generated completed scheduling and \( u_1 u_2' \) a valid scheduling such that \( u_1 u_2' \equiv v \). If there is no non-determinism when we are in the state \( F_M(u_1) \), then we must have \( u_2' = a w_2 \), and so \( v \) would be an \( \equiv \)-dominant of \( u_1 a \).
Consequently \( a \) is neither \( \delta \) nor \( \chi \), and the process of \( a \) is defined and eligible in \( F_M(u_1) \). Since an eligible process cannot become “sleeping” without running, \( a \) is present in \( u_2' \), so \( u_2' = w_1 a w_2 \). Since \( a \) is eligible in \( F_M(u_1) \), it is not causally after any element of \( w_1 \). There are three cases:
- if \( w_1 \) is empty then we get the needed contradiction
- if \( w_1 = w_1' b \) with \( (b,a) \notin D \), then there exists another possible scheduling \( u_1 u_2'' \equiv v \) such that \( u_2'' = w_1' a b w_2 \), with \( w_1' \) shorter than \( w_1 \);
- if \( w_1 = w_1' b \) with \( (b,a) \in D \) then:
- transition \( b \) is before \( a \) in \( v \), but they are permutable;
- so we have generated a scheduling \( v' \) with \( a \) before \( b \), using the algorithm described in section IV-C;
- there exists a possible scheduling \( u_1 u_2'' \equiv v' \) such that \( u_2'' = w_1' a b w_2' \), with \( w_1' \) shorter than \( w_1 \).
- Consequently, by induction on the length of \( w_1 \), we get the needed contradiction.
VI. PROTOTYPE IMPLEMENTATION AND EVALUATION
A. The prototype
Figure 8 is an overview of the tool. The checker implements the checking algorithm of section IV-D. It has to be aware of all communication actions. Some of them can be detected by instrumenting the SystemC kernel; some others cannot (like accesses to a shared variable, which are invisible from the SystemC kernel). We chose to instrument the C++/SystemC source code. For each communication action in the code of a SystemC process, we add an instruction that notifies the operation to a global recorder. For example, consider the instruction \( x = y \) where \( x \) and \( y \) are shared variables. The two following instructions are added close to the assignment:
`recorder->read(&y); recorder->write(&x);`
Instrumentation is based on the open-source SystemC front-end Pinapa [8], and is compositional.
Another solution would have been to interpret or instrument the binaries. However, using a SystemC front-end has some benefits: it allows us to generate a static dependency graph (SDG) which represents a superset of the communications that can occur between processes (see Figure 9). Moreover, it is easier to link the observed behavior to the source code.
The instrumented SystemC program is compiled with a patched SystemC kernel. The patches are: 1) replacing the election algorithm of the SystemC scheduler by an interactive version, still complying with the SystemC specification;
2) adding code to record the communication actions that cannot be detected in the code of the processes, and their consequences (e.g., enabling of a process). When we execute the instrumented platform with the patched SystemC kernel, we can detect dependencies dynamically or save a detailed trace and run the checker afterwards. In both cases, we get a list of new schedulings to be executed, and a record of the computed dependencies, usable as input for other checkers or visualization tools, like the production of the dynamic dependency graph (DDG).
B. Evaluation
In order to validate our tool and to evaluate the quality of the test suites produced, we studied several industrial SoC models. Assume that running one test-case takes some time \( T \). In order to cover the scheduling choices, we have to run more than one test-case. Let us denote \( V \) the number of valid schedulings, and \( G \) the number of schedulings generated by our tool. It is interesting to compare \( V \times T \) with \( G \times T + O \), where \( O \) is the overhead due to the computation of new schedulings.
With a real application, it is often difficult to evaluate \( V \). We chose to evaluate our method on three examples. First, we considered a SystemC encoding of the index problem presented in [4], because it is easy to evaluate \( V \). However, the indexer is not representative of the typical SystemC code found in industry. We then looked at two industrial case-studies: the first one has about 50 000 lines of code but only 4 processes, and it does not model a full SoC; the second one has about 250 000 lines of code and 57 processes, and it represents a full SoC.
1) The Indexer Example: There are \( n \) components and one global 128-element array used as a hash table. Each component is composed of 2 threads which communicate using a shared variable and a SystemC event. Each component writes 4 messages in the global hash table. This corresponds to schedulings of length \( 11 \times n \). For \( n \leq 11 \), there is no collision
http://www-verimag.imag.fr/~helmstat/indexer.cpp
in the hash table and all schedulings lead to the same final state. For \( n \geq 12 \) there are collisions hence non-equivalent schedulings. Our prototype generates valid schedulings leading to distinct states of the hash table. In this example, we generate exactly one scheduling per equivalence class. The number of generated schedulings is far smaller than the number of valid schedulings (at least \( 3.35E11 \) for \( n = 2 \), and \( 2.43E25 \) for \( n = 3 \)). Results are summarized in table I. Time is given only to help estimating the curve, not as an absolute measure.
<table>
<thead>
<tr>
<th>components</th>
<th>generated schedulings</th>
<th>time</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 . . . 11</td>
<td>1</td>
<td>\( \leq 11 \) ms</td>
</tr>
<tr>
<td>12</td>
<td>8</td>
<td>60 ms</td>
</tr>
<tr>
<td>13</td>
<td>64</td>
<td>4 s</td>
</tr>
<tr>
<td>14</td>
<td>512</td>
<td>35 s</td>
</tr>
<tr>
<td>15</td>
<td>4096</td>
<td>5 mn</td>
</tr>
</tbody>
</table>
**TABLE I**
RESULTS FOR THE INDEXER EXAMPLE
2) The MPEG Decoder System: This system has 5 components: a master, an MPEG decoder, a display, a memory and a bus model. There are about 50 000 lines of code and only 4 processes. This is quite common in the more abstract models found in industry, because there is a lot of sequential code, and very few synchronizations. We added 340 instrumentation lines to detect communication actions.

The test is stopped after the third decoded image, which corresponds to 150 transitions. One simulation takes 0.39 s. Our tool generates **128 schedulings** in **1 mn 08 s**. No bug is found, which guarantees that this test-case will run correctly on any SystemC implementation. Running the model 128 times takes more time than generating the schedulings (we have \( G \times T = 128 \times 0.39 \approx 50 \) s and \( O \approx 1 \) mn \( 08 \) s \( - \ 50 \) s \( \approx 18 \) s). Thus the overhead \( O \) remains acceptable.
On this example, we noticed that the number of generated schedulings could be improved. This MPEG decoder, as many other TLM models, uses a pair (event, variable) to implement a *persistent event* as follows (\( x \) is initially 0):
Process \( P \) runs: `x = 1; e.notify();`
Process \( Q \) runs: `if (!x) wait(e); x = 0;`
The two valid schedulings \( P;Q \) and \( Q;P;Q \) lead to the same final state, but our tool currently generates both schedulings because it cannot prove it. The reason is that these schedulings are not equivalent according to the dependency relationship as computed in section IV. Detecting this kind of structure in the source code and taking it into account for the computation of the dependency relationship would allow us to generate fewer schedulings.
3) A Complete SoC: Complete models of SoCs are typically 3 to 6 times bigger than the MPEG decoder. We are currently evaluating our tool on a model —let us call it XX— corresponding to a full SoC: it has about 250 000 lines of code and 57 processes. At the moment we are limited by the code instrumentation tool which still requires some manual work, so we looked at only one case study of this type, but the instrumentation tool will soon be fully automatic. For tests of length around 200 transitions, we expect the tool to behave well on XX: the ability to cope with this number of processes has been tested with the indexer example, and the ability to cope with the complexity of a large and realistic SystemC description has been tested with the MPEG example.
The interesting point with XX is the *granularity* of the transactions. With the MPEG decoder, the granularity corresponds to an algorithm that takes one line of the image at a time. Something interesting can be observed by a test oracle after 150 transitions only (three images have already been decoded). XX corresponds to an algorithm that takes one pixel of the image at a time. It may be the case that the test oracle has to observe thousands of transitions. XX is a very good case-study for observing the combined influence of the test length and the granularity on the performances of our technique. One phenomenon we can expect, and that we have to validate with the case-study, is the following: very abstract TLM descriptions have large-grain transactions, but loose synchronisations; while the more detailed TLM descriptions have finer-grain transactions, but stronger synchronizations. If the number of alternative schedulings decreases (because of stronger synchronizations) when the granularity of a description increases (and thus the length of the interesting test-cases), the method may still be applicable. We also comment on this point in the conclusion.
VII. RELATED WORK
Existing work (see, for instance, [9]) addresses formal verification for TLM models. The idea is to extract a formal model from the SystemC code, and to translate it into the input format of some model-checker. In such an approach, the complete model that is model-checked has to include a representation of the scheduler. It is sufficient to use a non-deterministic representation that reflects the specification of SystemC: a property that is proved with this non-deterministic scheduler is then indeed true for any deterministic implementation. Model-checking is likely to face the state-explosion problem, so testing methods are still useful, but we need the same guarantee that the results of the test are valid for any implementation of the simulation engine.
Partial order reduction techniques are quite old, but their *dynamic* extension is quite recent. As far as we know, it is not included in VERISOFT [10] yet. Partial order reduction is used in many model checkers for asynchronous concurrent programs, such as Spin [11] or JAVA PATHFINDER [12]. However, since we use testing, our work is more closely related to tools which work directly on the program without abstractions, such as VERISOFT or CMC [13]. The main difference is that our tool is adapted to the TLM SystemC constructs.
To get a complete validation environment, one needs to include a test-case generator and an output checker. For the latter, assertion-based verification (ABV) [14] proposes to derive monitors from assertion languages. However, these languages are often based on the notion of clocks, which is absent in TLM. If ABV is extended to TLM, it will become useful in our framework.
VIII. CONCLUSION AND FURTHER WORK
We presented a method to explore the set of valid schedulings of a SystemC program, for a given data input. This is necessary because the scheduling is a phenomenon due to the simulation engine only, and is unlikely to represent anything concrete on the final SoC. Exploring alternative schedulings during testing is a way of guaranteeing that the SoC description, and in particular the embedded software, is scheduler-independent, hence more robust. By using dynamic partial order reduction, we maximize the coverage and keep the number of tests as low as possible. Our tool also produces several graphical views that help in debugging SoCs. With the prototype tool, we have highlighted unwanted non-determinism in a bus arbiter for a transaction-accurate protocol. Also, some SoC descriptions are scheduler-dependent because they exploit the initial state of the most used implementation. In this case, covering the valid schedulings reveals deadlocks. Our tool is already mature enough to be used for industrial SystemC descriptions of SoCs.
There are at least two ways of improving the prototype performances. The first is to reduce the number of branches explored. A promising solution is to use partial state memorization. It is unrealistic to save all the states and compare the new state at each step due to the size and complexity of a SystemC model state. However, we can save some states and compare only particular new states. We plan to compare each forked execution every new delta-cycle. The second way is to reduce the time overhead needed for runtime checking. Some check results are predictable. Consequently doing static analysis before simulation can avoid runtime computation.
Further work on testing SoCs is threefold. First, the algorithm that fully explores alternative schedulings can be used on large platforms only if the length of the test is reasonable. A promising idea for very long tests is to use the method locally on the TLM description: a first execution of the whole platform is used to record the output transactions of some sub-system S of P. Then, our method is applied on a platform P' obtained by substituting S' for S in P, where S' is a sequential algorithm that replays the recorded transactions and therefore introduces no scheduling choices. The idea is that the method then concentrates on the schedulings due to P−S, forgetting the schedulings due to S.
Second, the whole approach and the SystemC prototype are being adapted to the exploration of non-fully specified timings in the TLM models. Indeed, TLM models are not cycle-accurate, but they are usually labeled with approximate timing properties of the components, in order to estimate the timing properties of the SoC early. In this case, the timings should not be taken as fixed values: the embedded software will be more robust if it works correctly for slightly distinct timings. In the testing process, it is useful to explore alternative timings, with the same idea of generating only those timings that are likely to change the global behavior of the SoC. An overview of the method can be found in [15].
We also started working on efficient implementations of the SystemC simulation engine, by exploiting multi-processor machines. Here, the difficulty is to guarantee that a multi-processor simulation does not exhibit behaviors that are not allowed by the non-deterministic reference definition of the scheduler. The formal setting we described here is appropriate for defining the set of behaviors that the multi-processor simulation may produce, without changing the behavior of the embedded software.
REFERENCES
Algorithms using Java for Spreadsheet Dependent Cell Recomputation*
Joe Francoeur
jfrancoe@mitre.org
December 3, 2002
Abstract
Java implementations of algorithms used by spreadsheets to automatically recompute the set of cells dependent on a changed cell are described using a mathematical model for spreadsheets based on graph theory. These solutions comprise part of a Java API that allows a client application to read, modify, and maintain spreadsheet data without using the spreadsheet application program that produced it. Features of the Java language that successfully improve the running time performance of the algorithms are also described.
1 Introduction
This paper describes algorithms for the recomputation of spreadsheet cells. The assumed context for such a recomputation occurs when a cell’s value is changed. In general, a cell is dependent on several others for its value as defined by its formula. Thus, to maintain the integrity of the spreadsheet, the reading of a cell value requires the recomputation of this cell once any of the cells on which it depends has changed.
The algorithms of this paper form the basis of ExcelComp [9], a Java application program interface (API) written by the author that allows the client application to read a specially formatted Microsoft Excel [4]
---
*This work was supported by US Army CECOM Contract DAAB07-01-C-C201, and SPAWAR PMW 176-1.
(henceforth referred to as “Excel”) spreadsheet output file, and then make changes to cell values within the ExcelComp representation of this spreadsheet. Changes to cell values are followed by the automatic recomputation of dependent cell values using ExcelComp methods. ExcelComp thus allows the client programmer to provide its users with both the data and behavior of an existing spreadsheet without the use of the original spreadsheet application program that produced it.
During the development of ExcelComp, it was realized that the choice of algorithms to perform cell recomputation involves a trade-off between two principal concerns: 1) ease of use, and 2) running-time performance. On the one hand, one mode of ExcelComp can simply load a file at run time that represents the spreadsheet, and then provide its services. While this mode is satisfactory for many tasks, it is unsuitable for those that require a large number of cell recomputations to support dynamic updates to real-time outputs, for example, the updating of an on-screen map that depends on thousands of cell recomputations. To support this latter task, a second mode was developed that allows faster cell recomputation at the expense of a less convenient installation procedure for the spreadsheet representation.
These considerations make ExcelComp an efficient, platform- and vendor-independent Java API that provides built-in spreadsheet emulation for application end-users. In particular, end-users are relieved of the burden of conducting their spreadsheet tasks outside the domain of their running application. In addition to the efficiency won by executing spreadsheet tasks natively, ExcelComp also obviates the need for costly additional licenses required for multiple users of the application software that produced the spreadsheet. Being written in the modern Java programming language allows the client programmer to easily integrate ExcelComp’s functionality into current software development efforts.
While other descriptions of spreadsheet algorithms are available [16][17], this paper is distinctive in its use of graph theory to improve the reader’s ability to visualize the algorithms, and to provide a basis for a proof of algorithm correctness. It also presents solutions that leverage features in the object-oriented Java API that lead to succinct, yet powerful code.
This paper focuses on the subject algorithms and the specific features of the Java language used by ExcelComp that are well-suited for their implementation. Readers interested in a more detailed specification of ExcelComp from the client programmer’s perspective may contact the author.
2 A Scenario
Before getting into the technical details that comprise this report, it would be helpful to consider a motivational scenario.
Consider a spreadsheet of financial data, where subtotals, interest earned, and a grand total might be some examples of computed quantities that each depend on entries in several cells. Analysts may use such a spreadsheet to play “what-if” games by varying values in cells that will affect some target cell, such as interest earned. The spreadsheet program would then automatically recompute all cells that are dependent on the ones changed. This capability of a spreadsheet is its hallmark, and distinguishes it from a simple table of values that have no computational relationship to one another.
Suppose that a computer program needs this spreadsheet of information and auto-update capability to carry out its tasks. This program is to provide its users with the what-if capability, and therefore requires not only cell values, but also cell formulas. Since it needs to emulate the recomputation function of the spreadsheet, it must implement algorithms that return the same recomputed values as the spreadsheet program. It is these algorithms of dependent cell recomputation that are the subject of this report.
3 Modes
ExcelComp has two modes of operation:
**Interpreted mode** requires the reading of an eXtensible Markup Language (XML) [2] representation of a spreadsheet. Once this file is parsed and loaded into ExcelComp’s data structures, the subject algorithms are implemented via ExcelComp methods. It is called *interpreted*, because cell formulas are interpreted at run time using a custom parser that recognizes a subset of Excel’s formula language.
**Compiled mode** uses cell-specific Java classes, created as an offline preprocessing task, to evaluate a cell by recursively evaluating each child cell referenced in its formula’s parse tree.
The interpreted mode is the slowest of the two. It has the advantage, however, of requiring less preprocessing, namely just the creation of the XML input file. During the development of ExcelComp, this XML file was produced by running an Excel macro [8]. It is also easier in this mode for the client programmer to provide the application user the flexibility to apply ExcelComp to a different spreadsheet by simply changing a filename reference in ExcelComp’s constructor. The comprehensive update of all dependent cell values upon the change of a constant cell in this mode allows the ExcelComp user to highlight the newly computed dependents. Such an application was developed by the author, where changed cells are shown in a JTable [22] with changed values highlighted in red to allow the user to gain insight into the impact of the change of a cell value.
The compiled mode is much faster than the interpreted mode, and should be preferred in cases where a client demands exceptionally high execution time performance, e.g., providing a real-time screen update that depends on the recomputations. The preprocessing needed for compiled mode includes the generation of Java source code that implements the formulas of the spreadsheet. This source code generation was automated by using a Java class that uses classes produced by parser generators [1, 14] to implement a custom parser for a subset of the Excel formula language. In general, Java classes that represent each cell formula in the spreadsheet must be created, and then be referenced by the classpath option for the Java virtual machine (JVM) [18].
4 Computation Model
To provide a lingua franca for the discussion to follow, we need to identify parts of a spreadsheet that are useful to us. While the intent is to have a model that is generic, platform-, and vendor-independent, the use of Excel as a reference implementation for ExcelComp influenced the latter’s design. The language of this paper will be similarly influenced; however, the model is germane to any spreadsheet that adheres to the computation model described here. While a queue-based computation model has been successfully developed [16], we will find it advantageous to develop a model using graph theory. In particular, proof of correctness of the algorithms can benefit from such a treatment.
There are many ways to present data using a spreadsheet. For example, two principal classes of representation provided by Excel are the workbook, and the chart. We consider only the tabular computation environment found in a workbook. The term spreadsheet will thus be used as a synonym for workbook.
A spreadsheet is a finite set of cells arranged as a matrix. A cell is a set that contains three elements of interest:
1. a value,
2. a formula, and
3. a cell reference.
A cell’s value is the result of the computation specified by its formula. In general, this value may be a real number, a string, or some other data type. To simplify this model, we will assume that these values are real. A cell’s formula is an expression that defines a cell’s value as a function, \( f \), of a subset of the spreadsheet’s cell values. Let \( C \) denote the set of cell values for some spreadsheet. More formally then, we have
\[ f : C^n \rightarrow C, \tag{1} \]
where \( C^n \) is the \( n \)-fold Cartesian product of \( C \) for some positive integer \( n \). For this model, we define \( C = \mathbb{R} \). In general, a formula expresses a composite of functions in the form (1). A formula that is not composite has constant values for its arguments. Such a formula is termed a constant formula.
A cell reference is an ordered pair that specifies a cell uniquely within the spreadsheet. The Excel “A1 reference style”[5] will be used, where the first element specifies the column, and the second element specifies the row. For example, B3 designates the cell at the intersection of the second column and the third row. We use \( X_i \) to denote a variable whose value is a cell reference. In the context of a formula, a cell reference is mapped to its corresponding cell value according to (1). We thus see that, in general, a cell reference refers to a composite function that is defined by the formula for that cell. For example, if cell M1 depends on N1 and P2, and N1 depends on Q3, and P2 depends on R1, then the value of M1 expressed as a composite function is \( M1(N1(Q3), P2(R1)) \). For this expression to be fully resolved, the formulas for both Q3 and R1 must be constant formulas.
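To make the composite-function view concrete, the M1 example might be sketched in Java as nested method calls. The dependency shape M1(N1(Q3), P2(R1)) follows the text; the arithmetic inside each method is invented purely for illustration.

```java
// Minimal sketch of the composite-function view of cell references.
// The dependency shape M1(N1(Q3), P2(R1)) comes from the text; the
// actual formulas below are hypothetical, chosen only for illustration.
class CompositeExample {
    static double q3() { return 3; }           // constant formula
    static double r1() { return 4; }           // constant formula
    static double n1() { return 2 * q3(); }    // N1 references Q3
    static double p2() { return r1() + 1; }    // P2 references R1
    static double m1() { return n1() + p2(); } // M1 references N1 and P2
}
```

Because `q3` and `r1` are constant formulas, the recursion initiated by `m1` resolves fully, mirroring the recursive resolution described above.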
To successfully develop the subject algorithms, the scope of the set of spreadsheets to be considered must be defined. This will be done by identifying properties that serve as axioms for the spreadsheets of interest. Spreadsheets that satisfy the stated properties are termed admissible.
There is a cohesive relationship between a cell’s formula and its value, as described in the following property.
Property 1. A cell’s value is completely determined recursively by its formula.
To be clear, Property 1 states that a cell’s value depends only on its formula, but that formula in general depends on other cell values that are, in turn, dependent only on their formulas. This recursion ends when a cell with a constant formula is reached, thus resolving all of the recursive cell value references.
The formulas of a spreadsheet define a potentially involved relation among its cells. Given cell Xi, its formula may be a constant formula, or a non-constant formula that defines Xi’s value as a function of other cell values in the spreadsheet. In the latter case, we say that Xi is **dependent** on the cells referenced in its formula. That is, the value of Xi depends on the values of the cells referenced in its formula. The term “value” will often be omitted when the context of “dependent” is clear. We also refer to each cell referenced in Xi’s formula as a **child** of Xi. Similarly, Xi is a **parent** of its children. A parent and any parent of a parent is termed an **ancestor**. A child and any child of a child is termed a **descendant**.
The relation among the dependent cells in a spreadsheet may be represented as a weakly connected directed graph, \( G(V, E) \), where \( V \) is the set of vertices, and \( E \) is the set of edges. Figure 1 illustrates this spreadsheet dependency graph with an example that will be used throughout this paper.

Each vertex of the graph is a cell represented as a box. The cell reference is given at the top of each cell box, and its corresponding formula is given in smaller type at the bottom of the box. It is understood that the value of a cell is assigned the value computed by its formula. The cell values are omitted from the cell boxes in Figure 1 for brevity. Note that C1 has two parents: E1 and F1. This lack of a unique parent in general for each cell precludes regarding this structure as a rooted tree. Although the more general graph is the appropriate data structure for representing cell dependencies in a spreadsheet, we will find that rooted trees will also be useful in the algorithms to be described.
In describing Figure 1, some of the basics of graph theory, using [6] as a guide, will be described as needed.
In Figure 1, the set of vertices $V$ are the cells, and the set of directed edges $E$ is defined according to the parent/child relationships. Each edge is directed from a parent to a child, so $E$ is a subset of the set of ordered pairs of vertices in $V$. Let a sequence of vertices be ordered such that $v_{i-1}$ is a parent of $v_i$, and let $v_{i-1}v_i$ denote the edge directed from $v_{i-1}$ to $v_i$. A sequence of edges and vertices of the form $\{v_0v_1, v_1v_2, \ldots, v_{n-1}v_n\}$ with distinct $v_i$ is defined as a directed path linking $v_0$ and $v_n$. Such a sequence with two or more vertices in which $v_n = v_0$ (and the remaining $v_i$ distinct) is a directed cycle.
We are now led to an important stipulation concerning spreadsheets.
**Property 2.** An admissible spreadsheet contains no directed cycles.
The Excel term for directed cycle is circular reference. Property 2 thus states that circular references are prohibited.
A graph from the subset of graphs just described is termed a directed acyclic graph or dag [3, section B.4].
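Property 2 can be checked mechanically. The following sketch is not part of ExcelComp; it assumes a map from each cell reference to the cells referenced in its formula, and uses a standard DFS with three vertex states (unvisited, on the current path, finished) to reject circular references.

```java
import java.util.*;

// Sketch (not part of ExcelComp): verify Property 2 by checking that the
// child-reference relation contains no directed cycle.
class CycleCheck {
    // children.get(X) holds the cells referenced in X's formula.
    static boolean isAdmissible(Map<String, List<String>> children) {
        Set<String> onPath = new HashSet<>();
        Set<String> done = new HashSet<>();
        for (String cell : children.keySet()) {
            if (hasCycle(cell, children, onPath, done)) return false;
        }
        return true;
    }

    private static boolean hasCycle(String cell, Map<String, List<String>> children,
                                    Set<String> onPath, Set<String> done) {
        if (done.contains(cell)) return false;      // already fully explored
        if (!onPath.add(cell)) return true;         // revisited on current path: cycle
        for (String child : children.getOrDefault(cell, List.of())) {
            if (hasCycle(child, children, onPath, done)) return true;
        }
        onPath.remove(cell);
        done.add(cell);
        return false;
    }
}
```

For the Figure 1 graph this check passes; adding a formula reference from A1 back to E1 would create a circular reference and fail it.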
Readers familiar with the GNU Make tool [28] will recognize this dependent cell recomputation problem as being analogous to the problem solved by Make: automatic determination of the pieces of code that require recompilation, and issuing the appropriate commands to bring the program up to date. Make uses a dependency graph model. See [26] for details and illustrations of Make’s dependency graphs, including a description of pitfalls concerning the proper use of Make to ensure correct dependency graph construction in large projects.
5 Interpreted mode
This section describes those algorithms that are implemented in the interpreted mode of ExcelComp. The integrity of the spreadsheet is preserved in this mode by recomputing all dependent cells of a cell whose constant formula (value) has changed. This behavior ensures that, upon the commitment of a new value to a cell, the entire spreadsheet will be updated to reflect the change. This matches the default behavior of Excel, where it is termed automatic calculation. Pseudocode is given in this section to highlight the salient features of the ExcelComp interpreted mode algorithms; the actual code differs in some of the implementation details.
5.1 Dependency Set Generation
Suppose that we are examining a spreadsheet for the first time, and have no a priori knowledge of its contents. Say we want to modify cell A1. By this, we mean that A1 has a constant formula that is to be changed to another constant formula. The more general act of modifying or adding a non-constant formula will not be discussed here; it is assumed that non-constant formulas remain fixed throughout our analysis.
Consider the impact that this change has on the cells that are dependent on A1 in Figure 1. First, this change in the formula causes a recomputation of A1’s value. This change will, in general, affect the values of all cells whose formulas reference A1. These cells, B1 and C1, are directly dependent on A1, and will need to be recomputed as a result. In general, the values of these direct dependents will change as a result of recomputation. These direct dependents must then be considered in the same light as A1; that is, we need to find and recompute the direct dependents of the direct dependents of A1. These cells are E1 and F1. From the point of view of A1, these latter cells are indirect dependents of A1.
The algorithm for discovering the set of dependent cells of a given cell is thus recursive. Let \( d \) be a set-valued function \( d : 2^C \rightarrow 2^C \) that computes the set of direct dependents of a subset of cells from \( C \). (Here, \( 2^C \) denotes the set of all subsets of \( C \), also known as the power set.) The procedure just described can now be expressed as
\[
A_{i+1} = d(A_i), \quad A_i \in 2^C, \quad A_0 = \{A1\}. \tag{2}
\]
\( A_0 \) is set to \( \{A1\} \) in (2) to reflect Figure 1, but in general it will be assigned to the set containing the cells that were changed.
Dependency Set Generation may be recognized as an implementation of the \textit{breadth-first search} (BFS) algorithm for graphs described in [3]. The “frontier between discovered and undiscovered vertices” described in [3] here divides two generations of dependencies, i.e., child/parent, parent/grandparent, etc. Also, note that we begin with a child and then discover ancestors, in reverse of the naming convention used in [3].
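Recurrence (2) and union (3) can be sketched as an iterative BFS in Java. The `parents` map (from each cell to the cells whose formulas reference it) and all names here are illustrative, not ExcelComp's API.

```java
import java.util.*;

// Sketch of recurrence (2) and union (3): starting from a changed cell,
// repeatedly take direct dependents (parents) until no new cells appear.
// parents.get(X) is assumed to hold the cells whose formulas reference X.
class DependencySet {
    static Set<String> dependents(String changed, Map<String, Set<String>> parents) {
        Set<String> d = new LinkedHashSet<>();       // D in (3)
        Deque<String> frontier = new ArrayDeque<>(); // the current A_i
        frontier.add(changed);
        while (!frontier.isEmpty()) {
            String cell = frontier.remove();
            for (String p : parents.getOrDefault(cell, Set.of())) {
                if (d.add(p)) {                      // skip already-discovered cells
                    frontier.add(p);
                }
            }
        }
        return d;                                    // changed cell itself excluded
    }
}
```

For Figure 1 with changed cell A1, this returns {B1, C1, E1, F1}; like the algorithm of Figure 2, the changed cell itself is not a member of the result.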
For recurrence relation (2) to be practical, we must be assured that it terminates. Indeed, an essential property of an algorithm is its finiteness; according to [15], “An algorithm must always terminate after a finite number of steps.” This assurance is given now as a theorem.
\textbf{Theorem 1.} \textit{Recurrence relation (2) terminates.}
\textit{Proof.} Because the indices of \( A_i \) in the recursion are strictly increasing, it is sufficient to show that there exists an \( i_{max} \) such that \( i \leq i_{max} \).

Assume that the spreadsheet under consideration has \( N \) cells. Let \( ||A_i|| \) denote the number of elements of \( A_i \), that is, the number of parents found for the cells in \( A_{i-1} \). Because of Property 2, the number of candidate parents for \( A_0 \) is \( N - 1 \), since a cell cannot be a parent of itself (thereby creating a circular reference). Similarly, the number of available parents for \( A_1 \) is at most \( N - 2 \), since both the children and grandchildren of \( A_2 \) must be excluded to avoid circular references. In general then,
\[ ||A_i|| \leq N - i, \]
and in particular, \( ||A_N|| = 0 \). At this point, no parents are available to continue further. We have thus shown that \( i \leq N \); that is, \( i_{max} = N \).
The final product of Dependency Set Generation is formed by taking the union of the sets of dependent cells found in (2). Assuming that the final index computed in (2) is \( n \), and letting \( D \) be the set of dependent cells, we have
\[ D = \bigcup_{i=1}^{n} A_i, \quad A_i \in 2^C. \tag{3} \]
Input: Set of cells for which dependent cells will be found
Output: Set of cells that are dependent on the input set of cells
DepSetGen(depSet, m)
{
    initial_size = depSet.size();
    for (k=m through initial_size-1; k++) {
        for (j=0 through SPRSHEET_SIZE-1; j++) {
            // Find all direct dependents of depSet[k].
            if (sprsheet[j].formula contains depSet[k].ref) {
                depSet.add(sprsheet[j].ref);
            }
        }
    }
    if (m < initial_size) {
        // Examine the cells not yet processed (including any just added).
        DepSetGen(depSet, initial_size);
    }
    if (m == 0) {
        depSet.delete(depSet[0]);
    }
}
Figure 2: Dependency Set Generation Algorithm
5.1.1 Example
The Dependency Set Generation algorithm is codified in Figure 2. Let us apply this algorithm to finding all dependents of A1 in Figure 1.
Assume the number of cells in the spreadsheet, SPRSHEET_SIZE, is 6. The array sprsheet holds all the cells in the spreadsheet. Each element of sprsheet has the fields ref and formula for cell reference and formula, respectively. The array depSet will be built up to contain the cell references of all of the dependents of its initial value.
We make the initial call to DepSetGen with depSet initialized to contain A1, and the depSet element marker m set to 0. This marker’s value is the index of the cell in array depSet whose dependents are sought. Variable initial_size is set to the number of elements in depSet. Since depSet contains only A1, initial_size = 1. Loop counter k ranges from 0 through 0. The inner loop checks to see whether any spreadsheet cell formula contains A1. If it does, the cell reference is added to depSet. At the time where the inner loop is finished, both B1 and C1 are appended to depSet. At the bottom of the outer loop, it is time to make the first recursive call to DepSetGen.
The actual parameters passed to DepSetGen are the newly updated 3-element depSet, and the initial size of depSet before the loops, 1. Now entering the first recursive call of DepSetGen, m is 1, initial_size is 3, depSet consists of A1, B1, and C1. Loop variable k ranges from 1 through 2. The first time through the inner loop finds all dependents of B1, and adds the one dependent found, E1, to depSet. When k is 2, the inner loop finds all dependents of C1, and adds the one dependent found, F1, to depSet.
Upon the next recursive call, the marker is set to the next unexamined element of depSet, E1 at index 3. No dependents are found. Similarly for F1 at index 4. Finally the outer loop is skipped, and control is eventually returned to the original call of DepSetGen, where m is 0. Lastly, the if statement is executed, and the initial element A1 is removed from depSet, since A1 is not dependent on itself. It is assumed depSet contains just one cell during the initial call to DepSetGen. Though not shown in Figure 2, an additional step of removing duplicate cell references from D is required to ensure that all of its elements are unique.
To conclude this section, it will be shown that the Dependency Set Generation algorithm just described indeed finds all dependents of a given cell.
**Theorem 2.** The Dependency Set Generation algorithm identifies all dependents of its input set of cells.
**Proof.** The proof is by contradiction. Assume we have a spreadsheet with cells $C_i$, $i = 0, 1, 2, \ldots, N - 1$. Suppose there exists a $C_k$ that is dependent on $C_0$ but was not identified by the Dependency Set Generation algorithm. Then by Property 1, this dependence of $C_k$ on $C_0$ must be due only to $C_k$'s recursively resolved formula, so there must be a directed path in the dependency graph from $C_k$ to $C_0$. Since $C_k$ was not identified, there is at least one cell $C_j$ in this path that was not found during the recursion, even though the cell directly below it on the path (a child of $C_j$) was found. But this contradicts the step in the algorithm that finds all direct dependents of every discovered cell.
### 5.2 Recomputation of Dependency Set Members
Once the dependency set $\mathcal{D}$ has been generated, the process of recomputing these cells can begin.
To evaluate a cell, designated the original cell, each argument in its formula, the original formula, must be evaluated. This evaluation is in general a recursive procedure. By drawing a directed edge from the original cell to each cell referenced in the formula (one edge per cell in the formula), we get a rooted tree that is rooted at the original cell. By applying this algorithm recursively on each cell in the formula, we get several paths, each of which ends at a leaf having a constant-formula cell. The resulting rooted call tree is just a subset of the spreadsheet dependency graph. We may then start at the leaves of the tree to construct a string of formulas of the cells in a given path in the direction back toward the root. Each string is a self-contained sequence of formulas that allows the original cell to be evaluated.
Lastly, once all the arguments in the original formula have been evaluated, the original cell can be evaluated. Those arguments in any formula that are not members of \( D \) need not be recomputed; their current values can be used instead.
The foregoing procedure is an implementation of the depth-first search (DFS) algorithm described in [3]. In this case, the use of the terms “predecessor” and “descendant” in [3] is consistent with our usage. However, we do not use “timestamping.”
We can see a lot in common here with the Dependency Set Generation algorithm. Once again, we see a recursive procedure being described, although it is not as easily expressible in one line as in (2). Instead, we will codify the algorithm in the pseudocode given in Figure 3.
5.2.1 Example (continued)
Continuing the example begun in the Dependency Set Generation section, consider again the dependency graph in Figure 1. Suppose that \( E_1 \) is to be evaluated. The well-known left-to-right, post-order tree traversal algorithm will be used to specify the order of evaluation of \( E_1 \)'s descendants. During the first call to \( \text{EvalCell} \), the recursion depth variable \( \text{depth} \) is initialized to 0. In addition to the \( \text{ref} \) and \( \text{formula} \) fields, assume that a \( \text{cell} \) object also contains a field \( \text{nchildren} \) that gives the number of children in its \( \text{formula} \) field. The object \( \text{cell} \) also contains a \( \text{child} \) array, each of whose elements is a \( \text{cell} \) representing each child in its formula. The elements of \( \text{child} \) are stored in order of occurrence in its \( \text{formula} \), element 0 being the leftmost child, and element \( \text{nchildren}-1 \) being the rightmost.
Input: Original Cell
Output: Original Cell with newly computed value
depth = 0; // Initial value (global)
EvalCell(cell)
{
    for (i=0 through cell.nchildren-1; i++) {
        depth++;
        EvalCell(cell.child[i]);
        depth--;
    }
    parser_str.append(cell.ref + '=' + cell.formula + ';');
    if (depth == 0) {
        cell.value = parse(parser_str);
    }
    return cell;
}
Figure 3: Dependency Set Evaluation Algorithm
We thus begin by calling \texttt{EvalCell} with \texttt{cell.ref} set to E1. The recursive evaluation of E1’s call tree begins with the leftmost cell reference in E1’s formula, B1. B1 $\in \mathcal{D}$, and therefore must be recomputed. We then begin with the leftmost element of its formula, and find that it is the constant 1. This is a constant, and thus requires no further evaluation; we move to the next element, A1. A1 is the changed cell, and its value is known, thus no further analysis is needed.
We have now reached the end of B1’s formula, allowing us to compute its value. We thus go back up to E1 to process the next argument in its formula, C1. C1 $\in \mathcal{D}$, and therefore must be recomputed. The leftmost child of C1’s formula is A1, and has already been evaluated. C1’s next child, D1 $\notin \mathcal{D}$, and thus does not have to be recomputed. Its current value of 10 is used.
We have now recursively evaluated all of the children of E1’s formula, and completed the building of \texttt{parser_str}. This string may then be passed to a parser for evaluation of E1. ExcelComp uses an LALR parser developed by the author using the tools JLex [1] and CUP [14]. LALR stands for LookAhead Left-to-right identifying the Rightmost production, and is described in [12]. For this example, the string is:
```
A1=2;B1=1+A1;A1=2;D1=10;C1=A1+D1;E1=B1+C1;
```
Here, “=” stands for assignment, and “;” delimits each assignment. It is assumed that the parser stores values via the assignment statements, and that these values may be retrieved at points later in the parse string. The history of the construction of the parser string is summarized in Table 1.
| depth | cell.ref | i | parser_str |
|-------|----------|---|------------|
| 0 | E1 | 0 | |
| 1 | B1 | 0 | |
| 2 | A1 | 0 | A1=2; |
| 1 | B1 | 0 | A1=2;B1=1+A1; |
| 0 | E1 | 1 | A1=2;B1=1+A1; |
| 1 | C1 | 0 | A1=2;B1=1+A1; |
| 2 | A1 | 0 | A1=2;B1=1+A1;A1=2; |
| 1 | C1 | 1 | A1=2;B1=1+A1;A1=2; |
| 2 | D1 | 0 | A1=2;B1=1+A1;A1=2;D1=10; |
| 1 | C1 | 1 | A1=2;B1=1+A1;A1=2;D1=10;C1=A1+D1; |
| 0 | E1 | 1 | A1=2;B1=1+A1;A1=2;D1=10;C1=A1+D1;E1=B1+C1; |
Table 1: History of Parser String Construction for E1
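The traversal summarized in Table 1 can be sketched in Java as a left-to-right, post-order walk. Cell data is modeled here with plain maps (only cell references count as children, so constants like the 1 in B1's formula do not appear); class and field names are illustrative, not ExcelComp's.

```java
import java.util.*;

// Sketch of the Figure 3 traversal: a left-to-right, post-order walk that
// appends "ref=formula;" for each visited cell. Names are illustrative.
class EvalSketch {
    private final Map<String, String> formulas;       // ref -> formula text
    private final Map<String, List<String>> children; // ref -> child refs, in formula order
    private final StringBuilder parserStr = new StringBuilder();

    EvalSketch(Map<String, String> formulas, Map<String, List<String>> children) {
        this.formulas = formulas;
        this.children = children;
    }

    String buildParserString(String ref) {
        visit(ref);
        return parserStr.toString();
    }

    private void visit(String ref) {
        for (String child : children.getOrDefault(ref, List.of())) {
            visit(child); // post-order: children first
        }
        parserStr.append(ref).append('=').append(formulas.get(ref)).append(';');
    }
}
```

Run on the Figure 1 data with A1 changed to 2, this reproduces the parser string of Table 1, redundant `A1=2;` assignment included.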
Note that there is a redundant assignment “A1=2” in parser_str. This suggests efficiency enhancements that could improve the performance of these algorithms. Improvements include, but are not necessarily limited to:
1. Recompute a cell value only once.
2. Do not recursively evaluate cells that are not in \( \mathcal{D} \).
Although these improvements surely are desirable for minimizing the number of algorithm steps, experience with their use in ExcelComp revealed that the time to run the additional code required to implement these improvements largely cancels out the benefits of fewer cell evaluations.
More ideas concerning the speed-up of interpreted mode are given in the Algorithm Complexity section.
6 Compiled mode
While the compiled mode agrees with interpreted mode with respect to the preservation of spreadsheet integrity, it uses a clever postponement of computation technique to update a dependent cell’s value just prior to the reading of its value via its accessor method. Using this deferred recomputation strategy, the execution time associated with recomputing those dependent cells whose values are never accessed is eliminated. This deferred recomputation is similar to Excel’s *manual calculation* mode, where the user specifies when recomputation is to occur thus deferring immediate recomputation. It is also similar to the *mark-sweep* garbage collection algorithm [29].
Compiled mode is implemented by ExcelComp’s use of the Cell API [27]. The highlight of this mode is preserving spreadsheet integrity while improving the running time performance of ExcelComp. Several techniques are used to meet this goal, including the use of:
1. the Java Reflection API
2. Hash containers, and
3. Deferred recomputation of dependent cells.
6.1 The Cell Class
Cell is an abstract Java base class that provides a framework for modeling the cells of a spreadsheet, each of which is represented by a class derived from Cell named CellXi, where Xi denotes the A1-style reference to its corresponding cell.
On its initial invocation, Cell’s accessor class method `getCell` instantiates a CellXi object via the Java Reflection API [25]. This technique allows a Java class to instantiate another class whose name is created at run time by the calling method. To improve the efficiency of subsequent accesses, Cell has a class variable `workbook` to reference a HashMap of references to previously instantiated CellXi objects. Similarly, each CellXi object has a HashSet named `dependencies` that contains references to CellXi objects that correspond to cells that are direct dependents (parents) of the CellXi object that owns `dependencies`.
In a CellXi’s constructor, each child’s instance method `addDependency` is called to add Xi to that child’s `dependencies` set. The use of `getCell` during this process causes each child’s object, on its first access, to be initialized with its constructor. A DFS traversal of \texttt{CellXi}’s call tree thus occurs so that each traversed parent appears in each of its children’s \texttt{dependencies} set. The set of all such parents is the set of \texttt{discovered ancestors}. Both \texttt{workbook} and \texttt{dependencies} contain only minimal subsets of the full set of their respective data that describes the entire spreadsheet as determined by the history of \texttt{Cell} method calls. These subsets are updated as necessary, and suffice for computing correct results when recomputation of cell values is necessary.
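The caching and reflective instantiation just described might be sketched as follows. The names follow the text (`getCell`, `workbook`, `dependencies`), but the implementation details are assumed, not ExcelComp's actual source; the `CellA1` class stands in for one file that `GenCell` would emit.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of Cell.getCell: a HashMap-backed cache of cell objects, with the
// Java Reflection API used to instantiate a class whose name ("Cell" + ref)
// is formed at run time. Details are assumed, not ExcelComp's actual code.
abstract class Cell {
    private static final Map<String, Cell> workbook = new HashMap<>();
    final Set<Cell> dependencies = new HashSet<>(); // discovered parents

    static Cell getCell(String ref) {
        return workbook.computeIfAbsent(ref, r -> {
            try {
                // e.g. ref "A1" resolves to the generated class CellA1
                return (Cell) Class.forName("Cell" + r)
                                   .getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException e) {
                throw new IllegalArgumentException("no class generated for " + r, e);
            }
        });
    }

    void addDependency(Cell parent) { dependencies.add(parent); }
}

// One generated cell class, as GenCell might emit it (illustrative only).
class CellA1 extends Cell { }
```

Repeated calls to `Cell.getCell("A1")` return the same cached object, so reflection pays its cost only on a cell's first access.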
6.2 Preprocessing
Figure 4 details the differences in the preprocessing requirements for the two modes of ExcelComp.
The preprocessing necessary for compiled mode begins with the same XML input file used in interpreted mode. A Java API named \texttt{GenCell} [10] is run on this input file to produce a set of Java source files. Each source file defines a \texttt{CellXi} class that corresponds to a spreadsheet cell. \texttt{GenCell} uses a JLex/CUP-based parser similar to that used in interpreted mode to allow the translation of a supported subset of the Excel formula language into the appropriate Java statements. The important difference between the two parsers is in their output. In interpreted mode, the ExcelComp parser returns a string at run time that represents a newly-computed value. For ExcelComp’s compiled mode, the \texttt{GenCell} parser returns Java source code that is written to a set of source files as an offline preprocessing task. Once \texttt{GenCell} has completed generating all of the source files, the Java compiler is run to compile these files into class files of executable bytecode. The ExcelComp user must then ensure that the Java classpath contains the appropriate class files.
6.3 Dependency Set Generation
The nature of the preprocessing performed in compiled mode allows ExcelComp to handle Dependency Set Generation as a distributed, on-demand task rather than as an explicit set of steps conducted immediately after a cell value is changed as in interpreted mode.
6.4 Deferred Recomputation of Dependent Cells
Rather than requiring the immediate recomputation of all members of \( D \) after a cell’s value has been changed, compiled mode defers the recomputation of any member of \( D \) until the client programmer requests its value via an ExcelComp accessor method. This strategy saves considerable time compared to its interpreted mode counterpart, since the recomputation of the many dependent cells whose values are never sought is avoided.
6.5 Getting a Value
Each \texttt{CellXi} object’s \texttt{getValue} method contains the Java encoding of the formula for cell Xi. The value of the cell is recomputed and stored as the instance variable \texttt{val} only when that object’s boolean \texttt{dirty} flag is true; this flag thus allows this method to avoid unnecessary recomputation. When recomputation is unnecessary, \texttt{getValue} just returns the value of \texttt{val}.
6.6 Setting a Value
When a \texttt{CellXi} object’s \texttt{setValue} method is called, its instance variable \texttt{val} is set to the desired value, and a DFS traversal of all of Xi’s discovered ancestors is performed to ensure that each such ancestor’s \texttt{dirty} flag is set to \texttt{true}. This ensures that all ancestors’ values are recomputed on a subsequent call to an ancestor’s \texttt{getValue} method. Note that recomputations are only done if an ancestor’s \texttt{getValue} method is called, thus saving the time of recomputing ancestor values that may never be accessed.
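Sections 6.5 and 6.6 together describe a classic lazy-invalidation pattern, sketched below with illustrative names (this is not ExcelComp's actual `Cell` code): `setValue` marks discovered ancestors dirty, and `getValue` recomputes only when the `dirty` flag is set.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of deferred recomputation: setValue marks all discovered ancestors
// dirty; getValue recomputes a cell's formula only when its dirty flag is set.
// Class and field names are illustrative, not ExcelComp's actual API.
abstract class LazyCell {
    private final Set<LazyCell> parents = new HashSet<>(); // discovered ancestors
    private double val;
    private boolean dirty = true;

    void addDependency(LazyCell parent) { parents.add(parent); }

    double getValue() {
        if (dirty) {              // recompute only on demand
            val = compute();
            dirty = false;
        }
        return val;
    }

    void setValue(double v) {
        val = v;
        dirty = false;
        markAncestorsDirty();
    }

    private void markAncestorsDirty() {
        for (LazyCell p : parents) {
            if (!p.dirty) {       // stop once an already-dirty ancestor is reached
                p.dirty = true;
                p.markAncestorsDirty();
            }
        }
    }

    protected abstract double compute(); // the cell's formula, as generated code
}
```

For Figure 1 with A1 initially 1, reading E1 yields 13; after setting A1 to 2, only the cells actually read are recomputed, and E1 yields 15, matching the example in Section 6.7.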
6.7 Example
The following example illustrates the workings of compiled mode using Figure 1. Given the spreadsheet represented by this graph, we will use compiled mode to set A1’s value to 2, and then get the value of E1. Observe that E1’s value with A1 set to 1 is 13. By changing A1’s value to 2, we expect the new value of E1 to be 15.
The first statement given by the client program is:
```java
Cell.getCell("A1").setValue(2);
```
The initial part of this statement, `Cell.getCell("A1")`, creates a new instance of `CellA1`, since one does not yet exist. This new instance is created using the `forName` class method of `java.lang.Class` [19]. `CellA1`’s base class constructor `Cell()` is first called to set `CellA1`’s `dirty` flag to `true`. `CellA1`’s `val` variable is set to 1 by its constructor. The last part of this statement calls `Cell`’s `setValue` method to set `val` to 2. In general, `setValue` recursively marks all of A1’s discovered ancestors as dirty. However, since no ancestors have yet been discovered, and thus no corresponding `CellXi` objects have yet been instantiated, no such marking occurs here. We are thus left with one instance of `CellA1` that is marked as dirty and has a value of 2.
We now get the value of E1. The appropriate statement is:
```java
Cell.getCell("E1").getValue();
```
The first part of the statement behaves in the same way as for A1 described above, only now the newly created object is an instance of `CellE1`. In addition, `CellE1`’s constructor initiates a DFS traversal of all of E1’s descendants to update each descendant’s `HashSet dependencies`. For this example, both `CellB1`’s and `CellC1`’s `dependencies` sets are updated by having a reference to `CellE1` added. Note that `CellC1`’s `dependencies` set does not refer to `CellF1` since, although F1 is dependent on C1, it is not a descendant of E1. Such a reference to `CellF1` need only be added if F1 is the subject of future method calls. The last part of this statement checks to see whether the instance of `CellE1` has been marked as dirty. Since `CellE1` was just constructed anew, it is marked as dirty and thus its value must be recomputed (in this case, computed for the first time). The statement in `CellE1`’s `getValue` method that accomplishes this is:
```java
val = Cell.getCell("B1").getValue() +
      Cell.getCell("C1").getValue();
```
Since the objects for B1, C1, and D1 were all marked as dirty during the DFS traversal in CellE1's constructor, each object's getValue method will recompute the value for that cell. Each object's getValue method resets that object's dirty flag to false after the recomputation. Subsequent accesses to CellXi values that have not been affected by a change to a descendant's value simply return the value stored in val with no recomputation necessary.
7 Performance
7.1 Tests
ExcelComp was tested for its running time performance in the SATCOM Availability Analyst (SA2) Java application [7]. The addition of ExcelComp API calls to SA2's map display function was chosen for testing due to its demanding requirement that 8,518 data points be updated for an on-screen Mercator map in such a way that the user is not burdened by long wait times for a complete update of the map. Processing each of the data points required 3 calls to ExcelComp methods; 2 of these calls each changed a cell value from the input spreadsheet, and the last call read back a cell value of interest from the newly updated spreadsheet. The map display function was selected and run 10+ times in each mode to characterize ExcelComp's performance. Running times associated with the first invocation of the map function were greater than subsequent trials, and thus were considered outliers and removed from the representative data. These larger values probably reflect JVM-related setup steps that are not required on subsequent trials.
The tests were conducted on a Hewlett-Packard HP OmniBook 4150 B running under Microsoft Windows 98 on a Pentium III 650 MHz processor. SA2 was run using the Sun Microsystems JVM version 1.3. Running times were computed as the difference in the start and end times returned by the Java method System.currentTimeMillis() [23].
The results of the performance tests are summarized in Table 2. The sample standard deviation is computed as the positive square root of the unbiased sample variance. The large difference in performance between the modes highlights how compiled mode can provide a very acceptable performance level in a case where interpreted mode, requiring over 5 minutes to
Table 2: ExcelComp Running Time (milliseconds) Performance over 10 Samples
<table>
<thead>
<tr>
<th></th>
<th>Compiled Mode</th>
<th>Interpreted Mode</th>
</tr>
</thead>
<tbody>
<tr>
<td>Average</td>
<td>677</td>
<td>347057</td>
</tr>
<tr>
<td>Sample Standard Deviation</td>
<td>116</td>
<td>254</td>
</tr>
</tbody>
</table>
complete, would be unacceptably slow. In this particular case, it is essential that compiled mode be chosen to make the use of ExcelComp feasible.
7.2 Algorithm Complexity
The graph traversal algorithms that underlie ExcelComp are well known to be efficient. Both DFS and BFS have running times that are linear in the size of the graph’s adjacency list. Specifically, BFS is $O(V + E)$, and DFS is $\Theta(V + E)$ [3, sect. 22.2, 22.3].
Interpreted mode does not construct an adjacency list, and could very well benefit from a redesign to create this list upon the loading of the spreadsheet in the ExcelComp constructor. Because this construction must take place at run time, the user would incur a one-time performance penalty for this initialization step. The absence of an adjacency list suggests that interpreted mode’s running time is probably greater than the linear time cited above.
Compiled mode, on the other hand, does use a variation of adjacency lists in CellXi’s dependencies set. However, while an adjacency list stores references to children, dependencies stores references to parents of CellXi. Compiled mode incurs a setup penalty during the discovery of cells, but subsequent accesses to CellXi objects are more efficient through the use of dependencies and workbook. Its use of adjacency lists, DFS traversals, and the efficient Java collections framework suggests that compiled mode has a running time that is close to $\Theta(V + E)$.
8 Conclusion
We have seen that a graph representation of spreadsheet cell dependencies provides insight into the requirements of the algorithms used for the automatic recomputation of dependent cells. Straightforward implementation of well-known graph traversal algorithms suffices for correct recomputation; however, the adaptation of a well-studied garbage collection algorithm, along with facilities made available in the Java language, enables client programs to run much faster, given some additional preprocessing.
The client programmer should choose the mode of ExcelComp according to an analysis of the application’s run time requirements and the tradeoffs between the modes as described in this paper.
9 Acknowledgements
I would like to thank Deborah Schuh and Dr. Joseph Rushanan for their reviews and helpful comments on the contents of this paper.
References
The 2006 Federated Logic Conference
The Seattle Sheraton Hotel and Towers
Seattle, Washington
August 10 - 22, 2006
IJCAR’06 Workshop
DISPROVING’06:
Non-Theorems, Non-Validity, Non-Provability
August 16th, 2006
Proceedings
Editors:
W. Ahrendt, P. Baumgartner, H. de Nivelle
A Fast Disprover for VeriFun
Markus Aderhold, Christoph Walther, Daniel Szallies, and Andreas Schlosser
Fachgebiet Programmiermethodik, Technische Universität Darmstadt, Germany
{aderhold, chr.walther, szallies, schlosser}@pm.tu-darmstadt.de
Abstract. We present a disprover for universal formulas stating conjectures about functional programs. The quantified variables range over freely generated polymorphic data types, thus the domains of discourse are infinite in general. The objective in the development was to quickly find counter-examples for as many false conjectures as possible without wasting too much time on true statements. We present the reasoning method underlying the disprover and illustrate its practical value in several applications in an experimental version of the verification tool VeriFun.
1 Introduction
As a common experience, specifications are faulty and programs do not meet their intention. Program bugs range from easily detected simple lapses (such as not excluding division by zero or typos when setting array bounds) to deep logical flaws in the program design which emerge elsewhere in the program and therefore are hard to discover.
But programmers' faults are not the only source of bugs. State-of-the-art verifiers synthesize conjectures about a program that are needed (or are at least useful) in the course of verification: Statements may be generalized to be qualified for a proof by induction, the verifier might generate termination hypotheses that ensure a procedure's termination, or it might synthesize conjectures justifying an optimization of a procedure. Sometimes these conjectures can be faulty, i.e. over-generalizations might result, the verifier comes up with a wrong idea for termination, or an optimization simply does not apply.
Verifying that a program meets its specification is a waste of time in all these cases, and therefore one should begin with testing the program beforehand. However, as testing is a time consuming and boring task, machine support is welcome (not to say needed) to relieve the human from the test-and-verify cycle. Program testing can be reformulated as a verification problem: A program conjecture $\phi$ fails the test if the negation of $\phi$ can be verified. However, for proving these negated conjectures, a special verifier—called a disprover—is needed.
In this paper, we present such a disprover for statements about programs written in the functional programming language $\mathcal{L}$ [14], which has been integrated into an experimental version [12] of the interactive verification tool VeriFun [15, 16]. The procedures of $\mathcal{L}$-programs operate over freely generated polymorphic data types and are defined by using recursion, case analyses, let-expressions, and functional composition.
structure bool <= true, false
structure N <= 0, +(- : N)
structure list[@A] <= ε, infix ::(hd : @A, tl : list[@A])
Fig. 1. A simple \(\mathcal{L}\)-program (the figure also defines the procedures <> and rev discussed below)
The data types \(\text{bool}\) for Boolean values and \(\text{N}\) for natural numbers \(\mathbb{N}\) as well as equality \(\equiv : @A \times @A \rightarrow \text{bool}\) and a procedure \(> : \text{N} \times \text{N} \rightarrow \text{bool}\) deciding the \(>\)-relation on \(\mathbb{N}\) are predefined in \(\mathcal{L}\). Figure 1 shows an example of an \(\mathcal{L}\)-program that defines a polymorphic data type \(\text{list}[@A]\), list concatenation \(<>\), and list reversal \(\text{rev}\). In this program, the symbols \(true\) and \(false\) are constructors of type \(\text{bool}\), \(-\) is the selector of the \(\text{N}\)-constructor \(+\), and \(hd\) and \(tl\) are the selectors of constructor \(::\) for lists. Subsequently, we let \(\Sigma(P)\) denote the signature of all function symbols defined by an \(\mathcal{L}\)-program \(P\), and \(\Sigma(P)^c\) is the signature of all constructor function symbols in \(P\). An operational semantics for \(\mathcal{L}\)-programs \(P\) is defined by an interpreter \(\text{eval}_P : \mathcal{T}(\Sigma(P)) \rightarrow \mathcal{T}(\Sigma(P)^c)\) which maps ground terms to constructor ground terms of the respective monomorphic data types using the definitions of the procedures and data types in \(P\), cf. [10, 14, 18].
In \(\mathcal{L}\), statements about programs are given by expressions of the form \(\text{lemma}\ name <= \forall x_1 : \tau_1, \ldots, x_n : \tau_n b\) (cf. Fig. 1), where \(b\)—called the body of the lemma—is a Boolean term built with the variables \(x_i\) (of type \(\tau_i\)) from a set \(\mathcal{V}\) of typed variables and the function symbols in \(\Sigma(P)\), where case analyses (like in procedure definitions) and the truth values are used to represent connectives. Hence the general form of the proof obligations we are concerned with are universal formulas \(\phi = \forall x_1 : \tau_1, \ldots, x_n : \tau_n b\). Disproving such a formula \(\phi\) is equivalent to proving its negation \(\neg\phi \equiv \exists x_1 : \tau_1, \ldots, x_n : \tau_n \neg b\), and as the domain of each type \(\tau_i\) can be enumerated, disproving \(\phi\) is a semi-decidable problem. A disproof of \(\phi\) (also called a witness of \(\neg\phi\)) can be represented by a constructor ground substitution \(\sigma\) such that \(\text{eval}_P(\sigma(b)) = false\). Consequently, disproving \(\phi\) can be viewed as solving the (semi-decidable) equational problem \(b \equiv false\).
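Such a witness search can be made concrete with a deliberately naive sketch (in Java rather than \(\mathcal{L}\)): enumerate small constructor ground terms and evaluate the conjecture on them. The false conjecture tested here, \(rev(k <> l) \doteq rev(k) <> rev(l)\), is our own illustrative example; the brute-force enumeration only illustrates the notion of a witness, not the calculi developed below.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Naive disproving by enumeration: look for a constructor ground substitution
// that falsifies a universal conjecture. The conjecture tested here,
// rev(k <> l) = rev(k) <> rev(l), is false in general (the correct right-hand
// side would be rev(l) <> rev(k)); this is an illustrative stand-in, not
// VeriFun's calculus.
class NaiveDisprover {
    static List<Integer> rev(List<Integer> xs) {
        List<Integer> r = new ArrayList<>(xs);
        Collections.reverse(r);
        return r;
    }

    static List<Integer> conc(List<Integer> a, List<Integer> b) {
        List<Integer> r = new ArrayList<>(a);
        r.addAll(b);
        return r;
    }

    // Enumerate all lists over {0, 1} up to the given length.
    static List<List<Integer>> enumerate(int maxLen) {
        List<List<Integer>> all = new ArrayList<>();
        all.add(new ArrayList<>());
        for (int len = 0; len < maxLen; len++) {
            List<List<Integer>> next = new ArrayList<>();
            for (List<Integer> xs : all) {
                if (xs.size() == len) {
                    for (int v = 0; v <= 1; v++) {
                        List<Integer> ys = new ArrayList<>(xs);
                        ys.add(v);
                        next.add(ys);
                    }
                }
            }
            all.addAll(next);
        }
        return all;
    }

    // Returns a witness {k, l} falsifying the conjecture, or null if none found.
    static List<List<Integer>> findWitness(int maxLen) {
        for (List<Integer> k : enumerate(maxLen)) {
            for (List<Integer> l : enumerate(maxLen)) {
                if (!rev(conc(k, l)).equals(conc(rev(k), rev(l)))) {
                    return List.of(k, l);
                }
            }
        }
        return null;
    }
}
```

Already for single-element lists a witness exists, e.g. k = [0], l = [1]: the left-hand side evaluates to [1, 0] while the right-hand side evaluates to [0, 1].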
To solve such an equation, we develop two disproving calculi that constitute the two phases of our disprover. The inference rules of both calculi are inspired by the calculus proposed in [6] by Comon and Lescanne for solving equational problems. As disproving is semi-decidable, a complete disprover can
be developed. However, as truth of universal formulas \( \phi \) is not semi-decidable by Gödel’s first incompleteness theorem, disproving \( \phi \) is undecidable. Therefore a complete (and sound) disprover need not terminate. But the use in an interactive environment—such as the one \texttt{VeriFun} provides—requires termination of all subsystems, hence completeness must be sacrificed in favor of termination. The use in an interactive environment also demands runtime performance, so particular care is taken to achieve early failure on non-disprovable conjectures.
In Section 2, we explain how we disprove universal formulas \( \phi \). In Section 3, we demonstrate the practical use of our disprover when \texttt{VeriFun} employs it in different disproving applications. We compare our proposal with related work in Section 4 and conclude with an outlook on future work in Section 5.
## 2 Disproving Universal Formulas
Our disproving method proceeds in two phases. The first phase is based on the elimination calculus (\( \mathcal{E} \)-calculus for short). Its language is given by \( \mathbb{L}_E := \{ \langle E, \sigma \rangle \in L_E \times \text{Sub} \mid \mathcal{V}(E) \cap \text{dom}(\sigma) = \emptyset \} \). \( L_E \) is the set of clauses in which atoms are built with terms from \( \mathcal{T}(\Sigma(P), \mathcal{V}) \) and the predicate symbols \( \doteq \) of type \( @A \times @A \) and \( \succ \) of type \( \mathbb{N} \times \mathbb{N} \); negative literals are written \( t_1 \not\doteq t_2 \) or \( t_1 \not\succ t_2 \), respectively. \( \text{Sub} \) denotes the set of all constructor ground substitutions \( \sigma \), i.e. \( \sigma(v) \in \mathcal{T}(\Sigma(P)^c) \) for each \( v \in \text{dom}(\sigma) \). The inference rules of the \( \mathcal{E} \)-calculus (defined below) are of the form “\( \langle E, \sigma \rangle \vdash \langle E', \sigma \circ \lambda \rangle \), if \( \text{COND} \)”, where \( \text{COND} \) stands for a side condition that has to be satisfied to apply the rule, and \( \lambda \in \text{Sub} \). An \( \mathcal{E} \)-deduction is a sequence \( \langle E_1, \sigma_1 \rangle, \ldots, \langle E_n, \sigma_n \rangle \) such that for each \( i \), \( \langle E_{i+1}, \sigma_{i+1} \rangle \) originates from \( \langle E_i, \sigma_i \rangle \) by applying an \( \mathcal{E} \)-inference rule, and \( \langle E_1, \sigma_1 \rangle \vdash_\mathcal{E} \langle E_n, \sigma_n \rangle \) denotes the existence of such an \( \mathcal{E} \)-deduction.
The second phase of our disproving method uses the solution calculus (\( \mathcal{S} \)-calculus for short). It operates on \( \mathbb{L}_S := \{ \langle E, \sigma \rangle \in L_S \times \text{Sub} \mid \mathcal{V}(E) \cap \text{dom}(\sigma) = \emptyset \} \). \( L_S \subset L_E \) is the set of clauses in which atoms are formed with the predicate symbols \( \doteq \), \( \succ \) and terms from \( \mathcal{T}(\Sigma(P'), \mathcal{V}) \), where \( \Sigma(P') \) emerges from \( \Sigma(P) \) by removing the function symbols \( if \) and \( \equiv \) as well as all procedure function symbols. The form of the \( \mathcal{S} \)-inference rules (defined below) and deductions \( \vdash_\mathcal{S} \) are defined identically to the \( \mathcal{E} \)-calculus, and \( \langle E, \sigma \rangle \vdash_{\mathcal{S} \circ \mathcal{E}} \langle E'', \sigma'' \rangle \) denotes the existence of a composed deduction \( \langle E, \sigma \rangle \vdash_\mathcal{E} \langle E', \sigma' \rangle \vdash_\mathcal{S} \langle E'', \sigma'' \rangle \).
A substitution \( \sigma \) is an \( \mathcal{E} \)-substitution for a clause \( E \in L_E \), \( \sigma \in \text{Sub}_E \) for short, iff \( \sigma(v) \in \mathcal{T}(\Sigma(P)^c) \) for each \( v \in \mathcal{V}(E) \). We write \( \sigma \vdash l \) iff an \( \{l\} \)-substitution \( \sigma \) solves an \( \mathcal{E} \)-literal \( l \), defined by \( \sigma \vdash t_1 \doteq t_2 \) iff \( \text{eval}_P(\sigma(t_1)) = \text{eval}_P(\sigma(t_2)) \) and \( \sigma \vdash t_1 \succ t_2 \) iff \( \text{eval}_P(\sigma(t_1)) > \text{eval}_P(\sigma(t_2)) \). An \( \mathcal{E} \)-clause \( E \) is solved by \( \sigma \in \text{Sub}_E \), \( \sigma \vdash E \) for short, iff \( \sigma \vdash l \) for each \( l \in E \). Both calculi are sound in the sense that \( \langle E, \sigma \rangle \vdash_\mathcal{E} \langle E', \sigma' \rangle \) entails \( \theta \vdash \sigma'(E) \) for each \( \theta \in \text{Sub}_{E'} \) with \( \theta \vdash E' \), and \( \sigma \subseteq \sigma' \).
To disprove a conjecture \( \phi = \forall x_1 : \tau_1, \ldots, x_n : \tau_n \ b \), we search for a deduction \( \langle \{ b \doteq \text{false} \}, \varepsilon \rangle \vdash_{\mathcal{S} \circ \mathcal{E}} \langle \emptyset, \sigma \rangle \).
The inference rules of both calculi are given in the subsequent paragraphs. The most important rules are formally defined whereas others (denoted by rule numbers in italics) are only informally described for the sake of brevity. In order to reduce the depth of the terms in $\mathcal{E}$- and $\mathcal{S}$-literals, some of the rules introduce fresh variables (called “auxiliary unknowns” in [6]), which we denote by $w$ and $w'$. Terms are written as $t$, $t_1$ and $t_2$, and $v$ and $v'$ denote variables.
### 2.1 Inference rules of the $\mathcal{E}$-calculus
The $\mathcal{E}$-calculus consists of the inference rules (1)–(3) and (5)–(8) of Fig. 2 plus rules (4) and (9)–(10) described informally. The purpose of the $\mathcal{E}$-inference rules is to eliminate all occurrences of $if$, $\equiv$, and of procedure function symbols so that some $\langle E, \sigma \rangle \in \mathbb{L}_S$ is obtained by an $\mathcal{E}$-deduction $\langle \{b \doteq \text{false}\}, \varepsilon \rangle \vdash_\mathcal{E} \langle E, \sigma \rangle$. All rules are supplied with an additional side condition (*) demanding $E \notin L_\perp$ for each $\langle E, \sigma \rangle$ they apply to, where $L_\perp$ is the set of all $\mathcal{E}$-clauses containing evident contradictions such as $\{t \not\doteq t, \ldots\}$, $\{0 \succ t, \ldots\}$ or $\{t \doteq 0, t \doteq 1, \ldots\}$. This proviso corresponds to the elimination of trivial disequations and the clash rule for equations in [6].
$^1$ Since the domain of each data type is at most countably infinite, we actually use monomorphic types $\tau'_i$ instead of the polymorphic types $\tau_i$ in $\phi$ without loss of generality: type $\tau'_i$ originates from type $\tau_i$ by instantiating each type variable in $\tau_i$ with type $\mathbb{N}$. E.g., to disprove $\forall k, l : \text{list}[@A] \; k <> l \doteq l <> k$, the monomorphic instance $\forall k, l : \text{list}[\mathbb{N}] \; k <> l \doteq l <> k$ is considered.
$^2$ Assuming $t \doteq \text{cons}(\ldots)$ is sound, as well-typedness is demanded.
$^3$ This elimination is possible whenever $b \doteq \text{false}$ is solvable, as each procedure call needs to be unfolded by rule (3) only finitely many times.
Rule (1) eliminates an if-conditional and rule (2) eliminates an inner procedure call from a literal.\(^4\) Rule (3) unfolds a call of procedure \(f\) that occurs as a direct argument in a literal. A procedure \(f\) is represented here by a set \(D_f\) of triples \((C, \overline{C}, r)\) such that \(r\) is the if-free result term in the procedure body of \(f\) obtained under the conditions \(C \cup \{\neg c \mid c \in \overline{C}\}\), where \(C\) and \(\overline{C}\) consist of if-free Boolean terms only. E.g., \(D_{<>}\) consists of two triples, viz. \(d_1 = (\{k \doteq \varepsilon\}, \emptyset, l)\) and \(d_2 = (\emptyset, \{k \doteq \varepsilon\}, hd(k) :: (tl(k) <> l))\), for procedure \(<>\) of Fig. 1.
A further rule (4) translates inequations and equations expressed with symbols from \(\Sigma(P)\) into \(\mathcal{S}\)-literals: e.g., “\(t_1 > t_2 \doteq \text{false}\)” is translated into “\(t_1 \not\succ t_2\)”. Rule (6) is like the decomposition rule from [6] for inequations, but restricted to constructors. Another rule (9) (corresponding to the elimination of trivial equations and clash for inequations in [6]) removes trivial literals such as \(t \doteq t\) or \(0 \not\succ t\) from a clause \(E \in L_E\) and supplies arbitrary values in \(\lambda\) for variables that disappear from the clause. Finally, literals are simplified by rule (10), which replaces subterms of the form \(\text{sel}_i(\text{cons}(t_1, \ldots, t_n))\) with \(t_i\). This rule does not exist in [6] and accounts for data type definitions with selectors.
\[\begin{align*}
(15) & \quad \frac{E \ni \{t_1 \odot t_2\}, \sigma}{E \cup \{w_1 \doteq t_1, w_2 \doteq t_2, w_1 \odot w_2\}, \sigma} \quad \text{, if } t_1, t_2 \notin \mathcal{V} \\
(16) & \quad \frac{E \ni \{v \neq t\}, \sigma}{E \cup \{v \neq t, v \doteq \text{cons}'(w_1, \ldots, w_n)\}, \sigma} \quad \text{, if } t \in \mathcal{V} \text{ or } t = \text{cons}(\ldots) \\
(17) & \quad \frac{E \ni \{t_1 \neq t_2\}, \sigma}{E \cup \{t_2 \doteq t_1\}, \sigma} \quad \text{, if } t_1, t_2 \notin \mathcal{V} \\
(18) & \quad \frac{E \ni \{t_1 \neq t_2\}, \sigma}{E \cup \{t_1 \doteq t_2\}, \sigma} \quad \text{, if } t_1, t_2 \in \mathcal{T}(\Sigma(P'), \mathcal{V})
\end{align*}\]
Fig. 3. Inference rules of the \(S\)-calculus
\(^4\) For a literal \( l = t_1 \odot t_2 \), \( l|_{\pi} \) is the subterm of \( l \) at occurrence \( \pi \in \text{Occ}(l) \), and \( l[\pi \leftarrow t] \) is obtained from \( l \) by replacing \( l|_{\pi} \) in \( l \) with \( t \). We use “\(\upharpoonright\)” as a shorthand for the succedents of different rules with the same premises and side conditions. All rules are applied “modulo symmetry” of \(\doteq\) and \(\neq\) if possible (e.g., see rule (5)).
### 2.2 Inference rules of the \(\mathcal{S}\)-calculus
The \(\mathcal{S}\)-calculus consists of the inference rules (5)–(19), to which the additional side condition (*) applies as well. Rules (5)–(10) are the same as in the \(\mathcal{E}\)-calculus; rules (11)–(14) are “structural” rules to merge (in)equations, to replace variables with terms, and to solve equations \(v \doteq t\) with \(t \in \mathcal{T}(\Sigma(P)^c)\) by substitutions \(\lambda := \{v/t\}\).
Rules (15)–(18) are given in Fig. 3. Rule (15) removes non-variable arguments from an \(S\)-literal, rule (16) is basically a case analysis on \(v\) using some
constructor \(\text{cons}'\), and rules (17) and (18) eliminate negative literals. Rule (19) invokes a constraint solver. We call an \(\mathcal{S}\)-literal \( t_1 \circ t_2 \) a constraint literal iff at least one of the \( t_i \) is a variable of type \( \mathbb{N} \), \( \Sigma(t_1) \cup \Sigma(t_2) \subseteq \{0, +, -\} \), and \( \circ \in \{\doteq, \succ\} \); \( C \) is the set of all constraint literals. When none of the other \(\mathcal{S}\)-rules is applicable, rule (19) passes the constraint literals \( E \cap C \) to a modified version of INDIGO [5]: To terminate on cyclic constraints like \( \{x \succ y, y \succ x\} \), we simply limit the number of times a constraint can be used by the number of variables in \( E \cap C \). If \( E \cap C \) can be satisfied, we get a solving assignment \( \lambda \in \text{Sub}_{E \cap C} \); otherwise rule (19) fails.
### 2.3 Search Heuristic and Implementation
By the inherent indeterminism of both calculi, search is required for computing \( \mathcal{E} \) - and \( \mathcal{S} \) - deductions: An \( \mathcal{E} \)-clause \( E \) to be solved defines an infinite \( \mathcal{E} \)-search tree \( T^E_{\mathcal{E}} \) whose nodes are labeled with elements from \( \mathbb{L}_E \). The root node of \( T^E_{\mathcal{E}} \) is labeled with \( \langle E, \varepsilon \rangle \), and \( \langle E'', \sigma'' \rangle \) is a successor of node \( \langle E', \sigma' \rangle \) iff \( \langle E'', \sigma'' \rangle \) originates from \( \langle E', \sigma' \rangle \) by applying some \( \mathcal{E} \)-inference rule. The leaves of \( T^E_{\mathcal{E}} \) are given by the \( \mathcal{E} \)-success and the \( \mathcal{E} \)-failure nodes: \( \langle E', \sigma' \rangle \) is an \( \mathcal{E} \)-success node iff \( E' \in L_S \setminus L_{\perp} \), and \( \langle E', \sigma' \rangle \) is an \( \mathcal{E} \)-failure node iff \( E' \in L_\perp \). A path from the root node to an \( \mathcal{E} \)-success node is called an \( \mathcal{E} \)-solution path. All these notions carry over to \( \mathcal{S} \)-search trees \( T^S_{\mathcal{S}} \) by replacing \( \mathcal{E} \) with \( \mathcal{S} \) literally, except that an \( \mathcal{S} \)-node labeled with \( \langle E', \sigma' \rangle \) is an \( \mathcal{S} \)-success node iff \( E' = \emptyset \), and \( \langle E', \sigma' \rangle \) is an \( \mathcal{S} \)-failure node iff \( E' \neq \emptyset \) and no \( \mathcal{S} \)-inference rule applies to \( \langle E', \sigma' \rangle \).
An \( \mathcal{E} \)-clause \( E \) to be solved defines an infinite \( \mathcal{S} \circ \mathcal{E} \)-search tree \( T^E_{\mathcal{S} \circ \mathcal{E}} \), which originates from \( T^E_{\mathcal{E}} \) by replacing each \( \mathcal{E} \)-success node \( \langle E', \sigma' \rangle \) with the \( \mathcal{S} \)-search tree \( T^{E'}_{\mathcal{S}} \). An \( \mathcal{S} \circ \mathcal{E} \)-solution path in \( T^E_{\mathcal{S} \circ \mathcal{E}} \) is an \( \mathcal{E} \)-solution path \( p \) followed by an \( \mathcal{S} \)-solution path that starts at the \( \mathcal{E} \)-success node of path \( p \).
To disprove a conjecture \( \phi = \forall x_1 : \tau_1, \ldots, x_n : \tau_n b \), the \( \mathcal{S} \circ \mathcal{E} \)-search tree \( T^E_{\mathcal{S} \circ \mathcal{E}} \) is explored to find an \( \mathcal{S} \circ \mathcal{E} \)-solution path. In order to guarantee termination of the search, only a finite part of \( T^E_{\mathcal{S} \circ \mathcal{E}} \) may be explored. Additional side conditions for the \( \mathcal{E} \)- and \( \mathcal{S} \)-inference rules ensure that one rule does not undo another rule’s work. Rules that do not require a choice, e.g. rule (5), are preferred to those that need some choice, e.g. rules (6) and (16).
The most significant restriction in exploring \( T^E_{\mathcal{S} \circ \mathcal{E}} \) (supporting termination at the cost of completeness) comes from an additional side condition (***) of rule (3), called the paramodulation rule in [8]: Definition triples \( d := (C, \mathcal{U}, r) \in D_f \) with \( f \notin \Sigma(r) \), called non-recursive definition triples, can be applied as often as possible. However, if \( f \in \Sigma(r) \), we need to limit the usage of \( d \). Side condition (***) demands that recursive definition triples be used at most once on each side of a literal in each branch of \( T^{\{b \not\equiv \text{false}\}}_{\mathcal{E}} \). This leads to a fast disprover that works well on simple examples, cf. Sect. 3. We call this restriction *simple paramodulation*.

---

5 We use \( \succ \) and \( \not\succ \) in \( L_E \) only to handle calls of the predefined procedure \( \triangleright \) more efficiently by a constraint solver.

6 In our setting there is no need to assign priorities to constraints, so we can simplify the algorithm by treating all constraints as “required” constraints.

7 \( E' \in L_\perp \) is sufficient but not necessary for \( \langle E', \sigma' \rangle \) being an \( \mathcal{S} \)-failure node, as the constraint solver called by rule (19) might fail on some \( \mathcal{S} \)-clause \( E' \in L_S \setminus L_\perp \).

---
To increase the deductive performance of the disprover, we can allow more applications of a recursive definition triple \( d \) by considering the “context” of \( f \)-procedure calls. Each procedure call \( f(\ldots) \) in the original formula \( \phi \) is labeled with \((N, \ldots, N) \in \mathbb{N}^k \) for a constant \( N \in \mathbb{N} \), e. g., \( N = 2 \), and \( k = |D_f| \). *Context-sensitive paramodulation* modifies side condition (***) in the following way: A recursive definition triple \( d_i := (C_i, \overline{C}_i, r_i) \in D_f \) may only be used if procedure call \( f(\ldots) \) is labeled with \((n_1, \ldots, n_k) \) such that \( n_i > 0 \). The recursive calls of \( f \) in \( r_i \) are labeled with \((n_1, \ldots, n_{i-1}, n_i - 1, n_{i+1}, \ldots, n_k) \), and the other procedure calls in \( r_i \) are labeled with \((N, \ldots, N) \). Context-sensitive paramodulation still allows only finitely many applications of rule (3), as the labels decrease with each rule application. Section 3 gives examples that illustrate the difference between these alternatives in practice. Note that simple paramodulation is *not* a special case of context-sensitive paramodulation (by setting \( N = 1 \)), because it does not distinguish between different occurrences of a procedure as context-sensitive paramodulation does.
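The labeling discipline of context-sensitive paramodulation can be sketched as follows. This is a hypothetical rendering in Python; `may_apply` and the relabeling helpers are our own names, not part of VeriFun. A triple \( d_i \) is applicable only while the call's budget \( n_i \) is positive; recursive calls get \( n_i \) decremented, all other calls are reset to \( (N, \ldots, N) \), so labels strictly decrease and only finitely many rule applications are possible.

```python
N = 2  # initial budget per definition triple, as in the example N = 2

def may_apply(labels, i):
    """Triple d_i applies to a call labeled (n_1, ..., n_k) only if n_i > 0."""
    return labels[i] > 0

def relabel_recursive_call(labels, i):
    """Label for the recursive calls of f inside r_i: decrement n_i."""
    return tuple(n - 1 if j == i else n for j, n in enumerate(labels))

def relabel_other_call(k):
    """Label for any other procedure call inside r_i: reset to (N, ..., N)."""
    return (N,) * k

labels = (N, N)                              # a call f(...) with k = |D_f| = 2
assert may_apply(labels, 0)
labels = relabel_recursive_call(labels, 0)   # (1, 2)
labels = relabel_recursive_call(labels, 0)   # (0, 2)
assert not may_apply(labels, 0)              # d_0 exhausted on this branch
assert may_apply(labels, 1)                  # d_1 still has budget
```

Since each application either decrements a component of the label or resets a *different* call's label, the sum of budgets along any branch is bounded, which is exactly why termination is preserved.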
For efficiency reasons (wrt. memory consumption), we explore \( T^{\{b \not\equiv \text{false}\}}_{\mathcal{E}} \) with a *depth-first* strategy, whereas \( T^{E'}_{\mathcal{S}} \) is examined *breadth-first* to avoid infinite applications of rule (16). Two technical optimizations considerably speed up the search for an \( \mathcal{S} \circ \mathcal{E} \)-solution path. Firstly, caching allows us to prune a branch that has already been considered in another derivation; the cache hit rates are about 20%. Secondly, while exploring \( T^{\{b \not\equiv \text{false}\}}_{\mathcal{E}} \), we can already start a subsidiary \( \mathcal{S} \)-search on \( \mathcal{S} \)-literals from a clause \( E \notin L_{\bot} \) (even though node \( \langle E, \sigma \rangle \) of \( T^{\{b \not\equiv \text{false}\}}_{\mathcal{E}} \) is not an \( \mathcal{E} \)-success node) and feed the results back to the \( \mathcal{E} \)-search node \( \langle E, \sigma \rangle \). For instance, if we derive \( x \equiv +(y) \) from \( E \), we can discard \( \mathcal{E} \)-branches that consider the case \( x \equiv 0 \). In conjunction with *simple paramodulation*, this (empirically) leads to early failure on unsolvable examples.
### 3 Using the Disprover
In this section we illustrate the use and the performance of our disprover when it is employed as a subsystem of *VeriFun* [12]. Unless otherwise stated, we use simple paramodulation. We distinguish between conjectures provided by the user and conjectures speculated by the system.
#### 3.1 User-Provided Conjectures
Before trying to verify a program statement, it is advisable to make sure that it does not contain lapses that render it false. E.g., in arithmetic we are often interested in cancellation lemmas such as \( x^y = x^z \rightarrow y = z \). However, the disprover
finds the witness \( \{x/0, y/0, z/1\} \) falsifying the conjecture. Excluding \( x=0 \) does not help, as now the witness \( \{x/1, y/0, z/1\} \) is quickly computed. But excluding \( x=1 \) as well causes the disprover to fail, hence we can expect that verification of \( \forall x, y, z: \mathbb{N} \ x \neq 0 \land x \neq 1 \land x^y = x^z \rightarrow y = z \) will succeed. If we conjecture the associativity of exponentiation, \( (x^y)^z = x^{(y^z)} \), the disprover finds the witness \( \{x/2, y/0, z/0\} \). For the injectivity conjecture of the factorial function, i.e. \( \forall x, y: \mathbb{N} \ x! = y! \rightarrow x = y \), the disprover comes up with the witness \( \{x/1, y/0\} \) and fails if we demand \( x \neq 0 \land y \neq 0 \) in addition. For \( \forall k, l: \text{list}[@A] \ k \ll l \equiv l \ll k \), the solution \( \{k/0::\varepsilon, l/1::\varepsilon\} \) is computed.
All conjectures from above are disproved within less than a second.\(^8\) One might argue that these disproofs are quite simple, so they should be easy to find. \texttt{VeriFun}'s old disprover [1] basically substitutes the variables with values (or value templates like \( n::k \) for lists) of a limited size and uses a heuristic search strategy to track down a counter-example quickly if one exists. However, such a strategy does not lead to early failure on true conjectures: The old disprover fails after \( 46 \) s on the conjecture that procedure \( \text{perm} \) (deciding whether two lists are a permutation of each other) computes a symmetric relation.\(^9\) The new disprover fails after just a second.
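The brute-force strategy of the old disprover can be sketched as a bounded exhaustive enumeration. This Python sketch is ours and simplifies the original (it ignores the heuristic search order and list value templates, and Python's arithmetic with `0 ** 0 == 1` may differ from VeriFun's, so the concrete witness found can differ from the ones reported above):

```python
from itertools import product

def find_witness(conjecture, vars_count, max_value):
    """Substitute all tuples of natural numbers up to max_value and return
    the first assignment falsifying the conjecture, or None if none exists."""
    for values in product(range(max_value + 1), repeat=vars_count):
        if not conjecture(*values):
            return values
    return None

# Cancellation conjecture x^y = x^z -> y = z.
cancellation = lambda x, y, z: not (x ** y == x ** z) or y == z
witness = find_witness(cancellation, 3, 2)
assert witness is not None and not cancellation(*witness)

# Excluding x = 0 and x = 1 leaves no witness within the bound, matching the
# expectation that the strengthened conjecture is true.
strengthened = lambda x, y, z: not (x != 0 and x != 1 and x ** y == x ** z) or y == z
assert find_witness(strengthened, 3, 2) is None
```

Note the weakness the paragraph above describes: on a true conjecture this strategy exhausts the whole bounded search space before giving up, which is exactly why it cannot fail early.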
The disprover also helps to find simple flaws in the definition of lemmas or procedures. For instance, it disproves lemma "\( \text{rev} \ll \)" (cf. Fig. 1), yielding \( \{k/0::\varepsilon, l/1::\varepsilon\} \). Also, the termination hypothesis for \( \ll \) is disproved at once if one inadvertently writes \( tl(l) \) in the recursive call of \( \ll \) (instead of \( tl(k) \)). Similar errors are the use of \( \geq \) instead of \( > \) in program conjectures and procedure definitions.
To illustrate the difference between simple and context-sensitive paramodulation, consider the formula \( \forall k: \text{list}[@A] \ \text{rev}(k) = k. \) As the smallest solution is \( \{k/0::1::\varepsilon\} \), we need to open \( \text{rev} \) twice. Thus the disprover fails to find this witness with simple paramodulation, but succeeds with context-sensitive paramodulation. The same effect is observed with lemma "\( \text{rev} \ll \)" or with \( \forall x: \mathbb{N} \ x^2 > x^2 \). However, as most conjectures do not need extensive search, we prefer to save time and offer this alternative only as an option to the user who is willing to spend more time on the search for a disproof.
#### 3.2 Conjectures Speculated by the System
When generalizing statements by machine, a disprover is needed to detect over-generalizations. E.g., \texttt{VeriFun}'s generalization heuristic [1] tries to generalize \( \phi = \forall k, l: \text{list}[@A] \ \text{half}(|k \ll l|) = \text{half}(|l \ll k|) \) to \( \phi' = \forall k, l: \text{list}[@A] \ |k \ll l| = |l \ll k| \) as well as to \( \phi'' = \forall k, l: \text{list}[@A] \ k \ll l \equiv l \ll k. \) Our disprover quickly fails on \( \phi' \) and succeeds on \( \phi'' \) (see above), hence generalization \( \phi' \) is a good candidate for a proof by induction, whereas \( \phi'' \) is recognized as an over-generalization of \( \phi. \)
\(^8\) All timing details refer to our single-threaded \texttt{Java} implementation on a 3.2 GHz hyper-threading CPU, where the \texttt{Java} VM was assigned 300 MB of main memory.
\(^9\) The old disprover examined the conjecture for lists of length \( \leq 2 \) and natural numbers between 0 and 2.
Another example of such a generate-and-test cycle is recursion elimination: For user-defined procedures, \texttt{VeriFun} synthesizes so-called difference and domain procedures which represent information that is useful for automated termination analysis [13, 17] and for proving the absence of “exceptions” [18] (caused by division by 0, for example). Both kinds of procedures may contain unnecessary recursive calls, which complicate subsequent proofs. Therefore the system generates recursion elimination formulas [13] justifying a sound replacement of some recursive calls with truth values. For those formulas that the system could not prove, the user has to decide whether to support the system either by interactively constructing a proof or by giving a witness to disprove the conjecture. The user can also ignore the often unreadable conjectures (which most users do), not being aware that missing a true recursion elimination formula means much more work in subsequent proofs.
For example, for the domain procedure of a tautology checker (cf. procedure \(\texttt{\neg\neg}\) in [14]), \texttt{VeriFun} generates 62 recursion elimination formulas. Our disprover falsifies all of them within 33 s. Without a disprover, we would have spent four times as long on futile proof attempts from which nothing can be concluded. With the old disprover, it took more than five times longer to disprove 59 of the formulas, and it failed on the others. For other domain or difference procedures, the disprover performs equally well, so in the vast majority of cases the user does not need to worry about recursion elimination any more. This is a tremendous improvement in user-friendliness.
### 4 Related Work
The problem of automatically disproving statements in the context of program verification has been tackled in various research projects.
Protzen [8] describes a calculus to disprove universal conjectures in the INKA system [4]. While it apparently performs quite well on false system-generated conjectures, it has a rather poor performance on true ones; if the input conjecture is true, it searches until it reaches an explicit limit of the search depth.
A disprover for KIV is presented in [9]. The existing proof calculus is modified so that it is able to construct disproofs. This interleaves the incremental instantiation of variables and simplifying proof steps. For solvable cases “good results” are reported, whereas performance on unsolvable problems is not communicated.
Ahrendt has developed a complete disprover for free data type specifications [2]. Since the interpretation of function symbols is left open in this loose semantics approach, one needs to consider all models satisfying the axioms when proving the non-consequence of a conjecture \(\phi\). Similarly, the ALLOY modeling system [7] can investigate properties of under-specified models. The corresponding constraint analyzer checks only models with a bounded number of elements in each primitive type, so (like our disprover) it is incomplete. Differently from these approaches, we consider only a fixed interpretation of function symbols (given by the interpreter \texttt{evalP}) in our setting.
Isabelle supports a “quickcheck” command [3] to test a conjecture by substituting random values for the variables several times. A comparison of the success rates and the performance of this approach with our results is planned as future work.
Coral [11] is a system designed to find non-trivial flaws in security protocols. It is based on the Comon-Nieuwenhuis method for proof by consistency and uses a parallel architecture for consistency checking and so-called induction derivation to ensure termination. Finding an attack on a protocol may take several hours with Coral.
### 5 Conclusion
In the design of our disprover we tried to minimize the time wasted on true conjectures. We achieved this by limiting the application of the paramodulation rule. Apart from this, we do not need any explicit depth limits. In particular, there is no explicit limit on the size of a witness. We also reduce the cost of simplifications by restricting them to selector and constructor calls. By incorporating a constraint solver [5] for inequalities on the predefined data type \( \mathbb{N} \) of natural numbers, we further improved the performance.
We identified several applications of our disprover that considerably improve productivity when working with the \texttt{VeriFun} system. The main applications of our disprover are bulk processing (such as recursion elimination) and automatic generalization. While it is possible to approximate completeness arbitrarily well to find deeper flaws in a program (conjecture), this would tremendously increase the time wasted on true conjectures. The advantage of our disprover is that it is successful in most solvable cases and quickly gives up on unsolvable cases, as practical experiments reveal.
In future work, we intend to investigate further heuristics for the paramodulation rule, which primarily controls the power of the disprover. We also intend to examine whether the use of verified lemmas supports the disproving process. Finally, it would be interesting to look at combinations of various disproving strategies. When we are aware of the strengths and weaknesses of different strategies, we could possibly decide beforehand which one is most suitable for a specific problem.
### References
A Comparison of Big Data Frameworks on a Layered Dataflow Model
This is a preprint version of the following article:
Original Citation:
A Comparison of Big Data Frameworks on a Layered Dataflow Model / Misale, Claudia; Drocco, Maurizio; Aldinucci, Marco; Tremblay, Guy. - In: PARALLEL PROCESSING LETTERS. - ISSN 0129-6264. - (2017), pp. 1-20.
Availability:
This version is available http://hdl.handle.net/2318/1626287 since 2017-05-15T10:38:53Z
Published version:
DOI:10.1142/S0129626417400035
Terms of use:
Open Access
Anyone can freely access the full text of works made available as "Open Access". Works made available under a Creative Commons license can be used according to the terms and conditions of said license. Use of all other works requires consent of the right holder (author or publisher) if not exempted from copyright protection by the applicable law.
(Article begins on next page)
This is the author's final version of the contribution published as:
Misale, Claudia; Drocco, Maurizio; Aldinucci, Marco; Tremblay, Guy. A Comparison of Big Data Frameworks on a Layered Dataflow Model. PARALLEL PROCESSING LETTERS, pp. 1-20.
DOI: 10.1142/S0129626417400035
The publisher's version is available at: http://www.worldscientific.com/doi/pdf/10.1142/S0129626417400035
When citing, please refer to the published version.
Link to this full text: http://hdl.handle.net/2318/1626287
A COMPARISON OF BIG DATA FRAMEWORKS ON A LAYERED DATAFLOW MODEL
CLAUDIA MISALE, MAURIZIO DROCCO, MARCO ALDINUCCI*
Dept. of Computer Science, University of Torino, Italy
GUY TREMBLAY†
Dépt. d’informatique, Université du Québec à Montréal, Canada
Received (received date)
Revised (revised date)
Communicated by (Name of Editor)
ABSTRACT
In the world of Big Data analytics, numerous tools aim at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data and execution models—for which only informal (and often confusing) semantics is generally provided—they all share a common underlying model, namely, the Dataflow model. The model we propose shows how various tools share the same expressiveness at different levels of abstraction. The contribution of this work is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm), thus making it easier to understand high-level data-processing applications written in such frameworks. Second, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit at each level.
Keywords: data processing, streaming, dataflow, skeletons, functional programming, semantics
### 1. Outline
With the increasing number of Big Data analytics tools, we witness a continuous fight among implementors/vendors in demonstrating how their tools are better than others in terms of performances or expressiveness. In this hype, for a user approaching Big Data analytics (even an educated computer scientist), it might be difficult to have a clear picture of the programming model underneath these tools and the expressiveness they provide to solve some user defined problem. With this in mind,
*{misale, drocco, aldinuc}@di.unito.it
†tremblay.guy@uqam.ca
we wanted to understand the features those tools provide to the user in terms of API and how they are related to parallel computing paradigms.
To provide some order in the world of Big Data processing, in this paper we categorize some models and tools to identify their programming models' common features. We identified the Dataflow model [12] as the common model that best describes all levels of abstraction, from the user-level API to the execution model. This model represents applications as a directed graph of actors. In its "modern" reissue (aka macro dataflow [2]), it naturally models independent (thus parallelizable) kernels starting from a graph of true data dependencies, where a kernel's execution is triggered by data availability.
The Dataflow model is expressive enough to describe the batch, micro-batch and streaming models that are implemented in most tools for Big Data processing. Being all realized under the same common idea, we show how various Big Data analytics tools share almost the same base concepts, differing mostly in their implementation choices. We instantiate the Dataflow model into a stack of layers where each layer represents a dataflow graph/model with a different meaning, describing a program from what the programmer sees down to the underlying, lower-level, execution model layer. Furthermore, we turn our attention to a problem arising from the high abstraction provided by the model that reflects into the examined tools. Especially when considering stream processing and state management, non-determinism may arise when processing one or more streams in one node of the graph, a well-known problem in parallel and distributed computing. Finally, the paper also focuses on high-level parallelism exploitation paradigms and their correlation with Big Data tools at the level of programming and execution models.
In this paper, we examine the following tools from a Dataflow perspective: Spark [15], Storm [13], Flink [9], and TensorFlow [1]. We focus on these tools since they are among the most popular and widely used ones nowadays. As far as we know, no previous attempt has been made to compare different Big Data processing tools, at multiple levels of abstraction, under a common formalism.
The paper proceeds as follows. Section 2 describes the Dataflow model and how it can be exploited at three different abstraction levels. Section 3 focuses on the user-level API of the tools. The various levels of our layered model are discussed in Sections 4, 5 and 6. Then, Section 7 discusses some limitations of the dataflow model in capturing all the tools' features. Finally, Section 8 concludes the paper and describes some future work.
### 2. The Dataflow Layered Model
By analyzing some well-known tools—Spark, Storm, Flink, and TensorFlow—we identified a common structure underlying all of them, based on the Dataflow model. In Section 2.1 we review the Dataflow model of computation, as presented by Lee and Parks [12]. In Section 2.2, we outline an architecture that can describe all these models at different levels of abstraction (see Fig. 1), from the (top) user-level API to the (bottom-level) actual network of processes. In particular, we show how the Dataflow model is general enough to subsume many different levels only by changing the semantics of actors and channels.
#### 2.1. The Dataflow Model
Dataflow Process Networks are a special case of Kahn Process Networks, a model of computation that describes a program as a set of concurrent processes communicating with each other via FIFO channels, where reads are blocking and writes are non-blocking [11]. In a Dataflow process network, a set of firing rules is associated with each process, called actor. Processing then consists of “repeated firings of actors”, where an actor represents a functional unit of computation over tokens. For an actor, to be functional means that firings have no side effects—thus functional actors are stateless—and the output tokens are functions of the input tokens. The model can also be extended to allow stateful actors.
A Dataflow network can be executed mainly using two approaches, namely process-based and scheduling-based—other models are flavors of these two. The process-based model is straightforward: each actor is represented by a process and different processes communicate via FIFO channels. In the scheduling-based model—also known as dynamic scheduling—a scheduler tracks the availability of tokens in input to actors and schedules enabled actors for execution; the atomic unit being scheduled is referred as a task and represents the computation performed by an actor over a single set of input tokens.
**Actors** A Dataflow actor consumes input tokens when it “fires” and then produces output tokens; thus it repeatedly fires on tokens arriving from one or more streams. The function mapping input to output tokens is called the kernel of an actor. The Dataflow Process Network model also seamlessly comprehends the Macro Dataflow parallel execution model, in which each process executes arbitrary code. Conversely, an actor’s code in a classical Dataflow architecture model is typically a single machine instruction.
A firing rule defines when an actor can fire. Each rule defines what tokens have to be available for the actor to fire. In the basic model, one token from each input channel must be available in order to enable one firing of the actor (i.e., from-all input policy). Multiple rules can be combined to program arbitrarily complex firing logics (e.g., the If node).
**Input channels** The kernel function takes as input one or more tokens from one or more input channels when a firing rule is activated. The basic model can be extended to allow for testing input channels for emptiness, to express arbitrary stream consuming policies (e.g., gathering from any channel: cf. Section 7).
**Output channels** The kernel function places one or more tokens into one or more output channels when a firing rule is activated. Each output token produced by a firing can be replicated and placed onto each output channel (i.e., broadcasting) or sent to specific channels, in order to model arbitrary producing policies (e.g., switch, scatter).
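A minimal, scheduling-based interpreter for the basic model just described (stateless actors, FIFO channels, the from-all firing rule, a dynamic scheduler tracking token availability) might be sketched as follows. This is our own illustrative sketch in plain Python, not any framework's implementation:

```python
from collections import deque

class Actor:
    """A stateless dataflow actor: it fires its kernel when every input
    channel holds at least one token (the basic from-all firing rule)."""
    def __init__(self, kernel, n_inputs):
        self.kernel = kernel
        self.inputs = [deque() for _ in range(n_inputs)]
        self.output = deque()

    def can_fire(self):
        return all(ch for ch in self.inputs)

    def fire(self):
        tokens = [ch.popleft() for ch in self.inputs]
        self.output.append(self.kernel(*tokens))

def run(actors):
    """A dynamic scheduler: repeatedly fire enabled actors until none is."""
    while True:
        enabled = [a for a in actors if a.can_fire()]
        if not enabled:
            break
        for a in enabled:
            if a.can_fire():
                a.fire()

adder = Actor(lambda x, y: x + y, n_inputs=2)
adder.inputs[0].extend([1, 2, 3])
adder.inputs[1].extend([10, 20])      # third firing blocked: channel 1 empty
run([adder])
assert list(adder.output) == [11, 22]
assert list(adder.inputs[0]) == [3]   # unconsumed token waits for a partner
```

Each firing here is the "task" of the scheduling-based execution model; the process-based model would instead dedicate one process per actor and block on channel reads.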
**Stateful actors** Actors with state can be considered like objects (instead of functions), with methods used to modify the object’s internal state. Stateful actors are an extension that allows side effects on local (i.e., internal to each actor) states. As shown by Lee and Parks [12], stateful actors can be emulated in the stateless Dataflow model by adding an extra feedback channel carrying the value of the state to the next execution of the kernel function on the next element of the stream, and by defining appropriate firing rules.
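The feedback-channel construction can be sketched in a few lines of Python. This is a simplification of Lee and Parks' idea under our own naming: the pure kernel maps `(state, token)` to `(new_state, output)`, and the new state travels back to the next firing over a one-token feedback queue.

```python
from collections import deque

def run_with_feedback(kernel, initial_state, stream):
    """Emulate a stateful actor with a stateless kernel and a feedback
    channel that carries the state to the next firing."""
    feedback = deque([initial_state])   # feedback channel, primed with s0
    outputs = []
    for token in stream:
        state = feedback.popleft()      # from-all rule: one token per input
        new_state, out = kernel(state, token)
        feedback.append(new_state)      # state re-enters via the feedback edge
        outputs.append(out)
    return outputs

# A running sum as a "stateful" actor built from a pure kernel.
running_sum = lambda s, x: (s + x, s + x)
assert run_with_feedback(running_sum, 0, [1, 2, 3, 4]) == [1, 3, 6, 10]
```

Priming the feedback channel with the initial state is what makes the first firing possible; without it, the from-all rule would never be satisfied.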
#### 2.2. The Dataflow Stack

The layered model shown in Fig. 1 presents five layers, where the three intermediate layers are Dataflow models with different semantics, as described in the paragraphs below. Underneath these three layers is the Platform level, that is, the runtime or programming language used to implement a given framework (e.g., Java and Scala in Spark), a level which is beyond the scope of our paper. On top is the Framework API level, that describes the user API on top of the Dataflow graph, which will be detailed in Section 3. The three Dataflow models in between are as follows.
- **Program Semantics Dataflow**: We claim the API exposed by any of the considered frameworks can be translated into a Dataflow graph. The top level of our layered model captures this translation: programs at this level represent the *semantics* of data-processing applications in terms of Dataflow graphs. Programs at this level do not explicitly express any form of parallelism: they only express data
dependencies (i.e., edges) among program components (i.e., actors). This aspect is covered in Section 4.
- **Parallel Execution Dataflow**: This level, covered in Section 5, represents an instantiation of the semantic dataflows in terms of processing elements (i.e., actors) connected by data channels (i.e., edges). Independent units—not connected by a channel—may execute in parallel. For example, a semantic actor can be replicated to express *data parallelism*, the execution model in which a given function is applied to independent input data.
- **Process Network Dataflow**: This level, covered in Section 6, describes how the program is effectively deployed and executed onto the underlying platform. Actors are concrete computing entities (e.g., processes) and edges are communication channels. The most common approach—used by all the considered frameworks but TensorFlow—is for the actual network to be a Master-Workers task executor. In TensorFlow, processing elements are effectively mapped to threads and possibly distributed over multiple nodes of a cluster.
### 3. The Frameworks’ User APIs
Data-processing applications are generally divided into *batch* vs. *stream* processing. Batch programs process one or more finite datasets to produce a resulting finite output dataset, whereas stream programs process possibly unbounded sequences of data, called *streams*, doing so in an incremental manner. Operations over streams may also have to respect a total data ordering—for instance, to represent time ordering.
Orthogonally, we divide the frameworks’ user APIs into two categories: *declarative* and *topological*. Spark, Flink, and TensorFlow belong to the first category—they provide batch or stream processing in the form of operators over collections or streams—whereas Storm belongs to the second one—it provides an API explicitly based on building graphs.
#### 3.1. Declarative Data Processing
A declarative data processing model provides as building blocks data collections and operations on those collections. The data model follows domain-specific operators, for instance, relational algebra operators that operate on data structured with the key-value model.
*Declarative batch processing* applications are expressed as methods on objects representing collections (Spark and Flink) or as functions on values (*tensors*, in TensorFlow): these are algebras on finite datasets, whose data can be ordered (as in tensors) or not (as in Spark/Flink multisets). APIs with such operations expose a functional-like style. Here are three examples of operations with their (multiset-based) semantics:
\[
\text{groupByKey}(a) = \{(k, \{v : (k,v) \in a\})\}
\]
\[
\text{join}(a,b) = \{(k, (v_a, v_b)) : (k,v_a) \in a \land (k,v_b) \in b\}
\]
\[
\text{map}(f)(a) = \{f(v) : v \in a\}
\]
The groupByKey unary operation groups tuples sharing the same key (i.e., the first field of the tuple); thus it maps multisets of type \((K \times V)^*\) to multisets of type \((K \times V^*)^*\). The binary join operation merges two multisets by coupling values sharing the same key. Finally, the unary higher-order map operation applies the kernel function \(f\) to each element in the input multiset.
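These multiset semantics can be encoded directly in plain Java, which makes the typing of groupByKey concrete. This is an illustrative sketch, not any framework API; the class and method signatures are hypothetical, and a multiset of key-value pairs is represented simply as a List of entries.

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical direct encoding of the three multiset operators.
public class MultisetOps {
    // groupByKey: (K x V)* -> (K x V*)*
    static <K, V> Map<K, List<V>> groupByKey(List<Map.Entry<K, V>> a) {
        Map<K, List<V>> out = new HashMap<>();
        for (Map.Entry<K, V> e : a)
            out.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        return out;
    }

    // join: pairs the values of a and b sharing the same key
    static <K, VA, VB> List<Map.Entry<K, Map.Entry<VA, VB>>> join(
            List<Map.Entry<K, VA>> a, List<Map.Entry<K, VB>> b) {
        List<Map.Entry<K, Map.Entry<VA, VB>>> out = new ArrayList<>();
        for (Map.Entry<K, VA> ea : a)
            for (Map.Entry<K, VB> eb : b)
                if (ea.getKey().equals(eb.getKey()))
                    out.add(Map.entry(ea.getKey(),
                            Map.entry(ea.getValue(), eb.getValue())));
        return out;
    }

    // map: applies the kernel f to every element of the multiset
    static <T, U> List<U> map(Function<T, U> f, List<T> a) {
        return a.stream().map(f).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> a =
                List.of(Map.entry("x", 1), Map.entry("x", 2), Map.entry("y", 3));
        if (!groupByKey(a).get("x").equals(List.of(1, 2))) throw new AssertionError();
        if (join(a, List.of(Map.entry("y", 9))).size() != 1) throw new AssertionError();
        if (!map(v -> v * 2, List.of(1, 2)).equals(List.of(2, 4))) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Note that groupByKey is returned here as a Map from keys to value lists, which is the natural Java rendering of a multiset of \((k, V^*)\) pairs.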
Declarative stream processing programs are expressed in terms of an algebra on possibly unbounded data (i.e., the stream as a whole) where data ordering may matter. Data is usually organized in tuples having a key field used, for example, to express the position of each stream item with respect to a global order—a global timestamp—or to partition streams into substreams. For instance, this allows expressing relational algebra operators and data grouping. In a stream processing scenario, we also have to consider two important aspects: state and windowing; those are discussed in Section 3.3.
Apache Spark implements batch programming with a set of operators, called transformations, that are uniformly applied to whole datasets called Resilient Distributed Datasets (RDD) [15], which are immutable multisets. For stream processing, Spark implements an extension through the Spark Streaming module, providing a high-level abstraction called discretized stream or DStream [16]. Such streams are represented as continuous sequences of RDDs of the same type, called micro-batches. Operations over DStreams are “forwarded” to each RDD in the DStream, thus the semantics of operations over streams is defined in terms of batch processing according to the simple translation \(\text{op}(a) = [\text{op}(a_1), \text{op}(a_2), \ldots]\), where \([\cdot]\) refers to a possibly unbounded ordered sequence, \(a = [a_1, a_2, \ldots]\) is a DStream, and each item \(a_i\) is a micro-batch of type RDD.
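The translation \(\text{op}(a) = [\text{op}(a_1), \text{op}(a_2), \ldots]\) amounts to lifting a batch operation pointwise over the sequence of micro-batches. A minimal plain-Java sketch of this lifting (not the Spark API; the names are hypothetical), with a "DStream" modeled as a list of batches:

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch of the micro-batch translation op(a) = [op(a1), op(a2), ...]:
// a batch operation is lifted by applying it to each micro-batch in turn.
public class MicroBatch {
    static <T, U> Function<List<List<T>>, List<List<U>>> lift(Function<List<T>, List<U>> op) {
        return stream -> stream.stream().map(op).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // batch operation: keep even numbers
        Function<List<Integer>, List<Integer>> evens =
                b -> b.stream().filter(x -> x % 2 == 0).collect(Collectors.toList());
        List<List<Integer>> dstream = List.of(List.of(1, 2), List.of(3, 4, 6));
        List<List<Integer>> out = lift(evens).apply(dstream);
        if (!out.equals(List.of(List.of(2), List.of(4, 6)))) throw new AssertionError();
        System.out.println(out);
    }
}
```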
Listing in Fig. 2 shows code for the simple Word Count example in Spark—the “Hello World!” example for Big Data. A collection (RDD) of words is first created by scanning a text file and splitting each line into its constituent words. Each word \(w\) is then paired (Tuple2) with 1, to indicate one occurrence of that word, generating the pairs RDD. All the 1s for a given word are then combined together, and reduced using addition, to obtain RDD counts, whose result is then saved as a text file.
Apache Flink’s main focus is on stream programming. The abstraction used is the DataStream, which is a representation of a stream as a single object. Operations are composed (i.e., pipelined) by calling operators on DataStream objects. Flink also provides the DataSet type for batch applications, which identifies a single immutable multiset—a stream of one element. A Flink program, either for stream or batch
*Here, \(\{\cdot\}\) denotes multisets rather than sets.*
```java
JavaRDD<String> textFile = sc.textFile("hdfs://...");
JavaRDD<String> words =
    textFile.flatMap(new FlatMapFunction<String, String>() {
        public Iterable<String> call(String s) {
            return Arrays.asList(s.split(" "));
        }
    });
JavaPairRDD<String, Integer> pairs =
    words.mapToPair(new PairFunction<String, String, Integer>() {
        public Tuple2<String, Integer> call(String s) {
            return new Tuple2<String, Integer>(s, 1);
        }
    });
JavaPairRDD<String, Integer> counts =
    pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
        public Integer call(Integer a, Integer b) {
            return a + b;
        }
    });
counts.saveAsTextFile("hdfs://...");
```
Fig. 2. Word Count example in Spark.
```java
public static void main(String[] args) throws Exception {
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> text = env.fromElements("Text...");
DataSet<Tuple2<String, Integer>> wordCounts =
text.flatMap(new LineSplitter())
.groupBy(0)
.sum(1);
wordCounts.print();
}
public static class LineSplitter
implements FlatMapFunction<String, Tuple2<String, Integer>> {
@Override
public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
for (String word : line.split(" ")) {
out.collect(new Tuple2<String, Integer>(word, 1));
}
}
}
```
Fig. 3. Word Count example in Flink.
processing, is a term from an algebra of operators over DataStreams or DataSets, respectively. Stateful stream operators and iterative batch processing are discussed in Section 3.3.
Listing in Fig. 3 shows Flink’s code for the Word Count example. The text DataSet is the sequence of lines from some text file. The resulting wordCounts DataSet, again consisting of words paired with their number of occurrences (Tuple2<String, Integer>), is obtained by splitting each line into words paired with 1s (flatMap with LineSplitter), then grouped by word (0th component of pair) and summed over the various 1s (1st component of pair).
Google TensorFlow is a framework specifically designed for machine learning applications, where the data model consists of multidimensional arrays called tensors and a program is a composition of operators processing tensors. A TensorFlow application is built as a functional-style expression, where each sub-expression can be given an explicit name. The TensorFlow programming model includes control flow operations and, notably, synchronization primitives (e.g., MutexAcquire/MutexRelease for critical sections). The latter implies that TensorFlow exposes the underlying (parallel) execution model to the user, who has to program the eventual coordination of operators concurring over some global state. Because of space limitations, we do not provide a TensorFlow Word Count example.
#### 3.2. Topological Data Processing
Topological programs are expressed as graphs, built by explicitly connecting processing nodes and specifying the code executed by nodes.
Apache Storm is a framework that only targets stream processing. Storm’s programming model is based on three key notions: Spouts, Bolts, and Topologies. A Spout is a source of a stream; it is typically connected to a data source, or it can generate its own stream. A Bolt is a processing element: it processes any number of input streams and produces any number of new output streams. Most of the logic of a computation goes into Bolts, such as functions, filters, streaming joins or streaming aggregations. A Topology is the composition of Spouts and Bolts resulting in a network. Storm uses tuples as its data model, i.e., named lists of values of arbitrary type. Hence, Bolts are parametrized with per-tuple kernel code. Each time a tuple is available from some input stream, the kernel code gets activated to process that input tuple. Bolts and Spouts are locally stateful, as we discuss in Section 3.3, while no global consistent state is supported. Yet, globally stateful computations can be implemented since the kernel code of Spouts and Bolts is arbitrary. However, any global state management would be the sole responsibility of the user, who has to be aware of the underlying execution model in order to ensure coordination among Spouts and Bolts. It is also possible to define cyclic graphs by way of feedback channels connecting Bolts.
While Storm targets single-tuple granularity in its base interface, the Trident API is an abstraction that provides declarative stream processing on top of Storm. Namely, Trident processes streams as a series of micro-batches belonging to a stream considered as a single object.
Listing in Fig. 4 shows Storm’s code for the Word Count example. A key element is the WordCount bolt execute method. Each call to execute receives a Tuple containing a word. The bolt keeps track of the number of occurrences of each word using the counts Map, and emits that word paired with its current count—thus generating a stream of incremental numbers of occurrences. The spout (random sentences) and bolts (sentence splitter, word counting) are created and connected in the main method.
```java
public static class SplitSentence extends ShellBolt implements IRichBolt {
    public SplitSentence() {
        super("python", "splitsentence.py");
    }
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
public static class WordCount extends BaseBasicBolt {
    Map<String, Integer> counts = new HashMap<String, Integer>();
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String word = tuple.getString(0);
        Integer count = counts.get(word);
        if (count == null)
            count = 0;
        count++;
        counts.put(word, count);
        collector.emit(new Values(word, count));
    }
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }
}
public static void main(String[] args) throws Exception {
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("spout", new RandomSentenceSpout(), 5);
    builder.setBolt("split", new SplitSentence(), 8).shuffleGrouping("spout");
    builder.setBolt("count", new WordCount(), 12).fieldsGrouping("split", new Fields("word"));
    Config conf = new Config();
    conf.setDebug(true);
    conf.setNumWorkers(3);
    StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
}
```
Fig. 4. Word Count example in Storm.
#### 3.3. State, Windowing and Iterative Computations
Frameworks providing stateful stream processing make it possible to express modifications (i.e., side-effects) to the system state that will be visible at some future point. If the state of the system is global, then it can be accessed by all system components. For example, TensorFlow mutable variables are a form of global state, since they can be attached to any processing node. On the other hand, local states can be accessed only by a single system component. For example, the mapWithState functional in the Spark Streaming API realizes a form of local state, in which successive executions of the functional see the modifications to the state made by previous
ones. Furthermore, state can be partitioned by shaping it as a tuple space, following, for instance, the aforementioned key-value paradigm. With the exception of TensorFlow, all the considered frameworks provide local key-value states.
Windowing is another concept provided by many stream processing frameworks. A window is informally defined as an ordered subset of items extracted from the stream. The most common form of windowing is referred as a sliding window, characterized by its size (how many elements fall within the window) and sliding policy (how items enter and exit from the window). Spark provides the simplest abstraction for defining windows, since they are just micro-batches over the DStream abstraction, where only the window size and sliding policy can be specified. Storm and Flink allow more arbitrary kinds of grouping, producing windows of Tuples and WindowedStreams, respectively. Note that this does not break the declarative or topological nature of the considered frameworks, since it only changes the type of the processed data. Note also that windowing can be expressed in terms of stateful processing, by considering window-typed state.
Finally, we consider another common concept in batch processing, namely iterative processing. In Flink, iterations are expressed as the composition of arbitrary DataSet values by iterative operators, resulting in a so-called IterativeDataSet. Component DataSets represent for example step functions—executed in each iteration—or termination condition—evaluated to decide if iteration has to be terminated. Spark’s iteration model is radically simpler, since no specific construct is provided to implement iterative processing. Instead, an RDD (endowed with transformations) can be embedded into a plain sequential loop. Finally, TensorFlow allows expressing conditionals and loops by means of specific control flow operators such as For, similarly to Flink.
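Spark's loop-in-the-driver style of iteration can be illustrated with a plain sequential loop around an immutable collection. The step function and termination condition below are arbitrary placeholders, not taken from any real application:

```java
import java.util.*;
import java.util.stream.Collectors;

// Sketch of Spark-style iteration: no dedicated iteration construct,
// just a driver loop that re-applies a transformation to an immutable
// collection until a termination condition holds.
public class DriverLoop {
    public static void main(String[] args) {
        List<Double> data = List.of(100.0, 50.0, 10.0);
        for (int step = 0; step < 20; step++) {
            // step function: halve every value (produces a new collection)
            data = data.stream().map(x -> x / 2).collect(Collectors.toList());
            // termination condition, evaluated in the driver
            if (data.stream().allMatch(x -> x < 1.0)) break;
        }
        if (!data.stream().allMatch(x -> x < 1.0)) throw new AssertionError();
        System.out.println(data);
    }
}
```

In Flink, by contrast, the step function and termination condition would be handed to an iterative operator, and in TensorFlow to a control flow construct, rather than living in driver code.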
### 4. Program Semantics Dataflow
The Program Semantics Dataflow level of our layered model provides a representation of the program in terms of the Dataflow model. Such a model describes the application using operators and data dependencies among them, thus creating a topological view common to all frameworks. This level does not explicitly express parallelism: instead, parallelism is implicit through the data dependencies among actors (i.e., among operators), so that operators which have no direct or indirect dependencies can be executed concurrently.
#### 4.1. Semantic Dataflow Graphs
A semantic Dataflow graph is a pair $G = (V, E)$ where actors $V$ represent operators, channels $E$ represent data dependencies among operators and tokens represent data to be processed. For instance, consider a map function $m$ followed by a reduce function $r$ on a collection $A$ and its result $b$, represented as the functional composition $b = r(m(A))$. This is represented by the graph in Fig. 5, which represents the semantic dataflow of a simple map-reduce program. Note that the user program
translation into the semantic dataflow can be subject to further optimization. For instance, two or more non-intensive kernels can be mapped onto the same actor to reduce resource usage.
Fig. 5. Functional Map and Reduce dataflow expressing data dependencies.
Fig. 6. Spark DAG of the WordCount application (a). A Flink JobGraph (b). A TensorFlow application graph, adapted from [1] (c).
Notably, the Dataflow representation we propose is adopted by the considered frameworks as a pictorial representation of applications. Fig. 6(a) shows the semantic dataflow—called application DAG in Spark—related to the WordCount application, having as operations (in order): 1. read from text file; 2. a flatMap operator splitting the file into words; 3. a map operator that maps each word into a key-value pair \((w, 1)\); 4. a reduceByKey operator that counts occurrences of each word in the input file. The DAG is grouped into stages (namely, Stages 0 and 1), which divide map and reduce phases. This distinction is related to the underlying parallel execution model and will be covered in Section 5. Flink also provides a semantic representation—called JobGraph or condensed view—of the application, consisting of operators (JobVertex) and intermediate results (IntermediateDataSet, representing data dependencies among operators). Fig. 6(b) presents a small example of a JobGraph. Finally, Fig. 6(c) is a TensorFlow example (adapted from [1]). A node represents a tensor operation, which can be also a data generation node (e.g., \(W\), \(b\), \(x\)). Each node has firing rules that depend on the kind of incoming tokens. For example, control dependencies edges can carry synchronization tokens: the target
node of such edges cannot execute until all appropriate synchronization signals have been received.
#### 4.2. Tokens and Actors Semantics
Although the frameworks provide a similar semantic expressiveness, some differences are visible regarding the meaning of tokens flowing across channels and how many times actors are activated.
When mapping a Spark program, tokens represent RDDs and DStreams for batch and stream processing respectively. Actors are operators—either transformations or actions in Spark nomenclature—that transform data or return values (in-memory collection or files). Actors are activated only once in both batch and stream processing, since each collection (either RDD or DStreams) is represented by a single token. For Flink, the approach is similar: actors are activated only once in all scenarios except in iterative algorithms—see Sect. 4.3. Tokens represent DataSets and DataStreams that identify whole datasets and streams respectively. For TensorFlow, the same mapping holds: operators are mapped to actors that take as input single tokens representing Tensors (multi-dimensional arrays). Actors are activated once except for iterative computations, as in Flink. Storm is different since a token represents a single stream item (Tuple). Consequently, actors, representing (macro) dataflow operators, are activated each time a new token is available.
From the discussion above, we can note that Storm’s actors follow a from-any policy for consuming input tokens, while the other frameworks follow a from-all policy as in the basic Dataflow model. In all the considered frameworks, output tokens are broadcast onto all channels going out of a node.
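The difference between the two firing policies can be made concrete with a small sketch (hypothetical helper methods; input channels modeled as queues of tokens):

```java
import java.util.*;

// Sketch of the two firing policies. A from-all actor (basic Dataflow;
// Spark, Flink, TensorFlow) fires only when every input channel holds a
// token; a from-any actor (Storm) fires as soon as any channel holds one.
public class FiringPolicies {
    static boolean fromAllReady(List<Queue<String>> inputs) {
        return inputs.stream().noneMatch(Queue::isEmpty);
    }
    static boolean fromAnyReady(List<Queue<String>> inputs) {
        return inputs.stream().anyMatch(q -> !q.isEmpty());
    }

    public static void main(String[] args) {
        Queue<String> c1 = new ArrayDeque<>(List.of("t1"));
        Queue<String> c2 = new ArrayDeque<>(); // empty channel
        List<Queue<String>> in = List.of(c1, c2);
        if (fromAllReady(in)) throw new AssertionError();  // must wait for c2
        if (!fromAnyReady(in)) throw new AssertionError(); // can fire on c1 alone
        System.out.println("ok");
    }
}
```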
#### 4.3. Semantics of State, Windowing and Iterations
In Section 3.3, we introduced stateful, windowing and iterative processing as convenient tools provided by the considered frameworks.
From a Dataflow perspective, stateful actors represent an extension to the basic model—as sketched in Section 2.1—only in case of global state. In particular, globally-stateful processing breaks the functional nature of the basic Dataflow model, inhibiting for instance to reason in pure functional terms about program semantics (cf. Section 7). Conversely, locally-stateful processing can be emulated in terms of the pure Dataflow model. We remark that at the semantic level, capturing stateful processing within declarative models requires no modifications to the proposed Dataflow model, since this aspect is embedded into the semantics of each operation.
For instance, consider the semantics of a generic mapWithState functional. This functional is parametrized by the binary kernel \( f : T \times S \rightarrow U \times S \), that takes as input an item to be processed \((a_i \in T)\) in addition to the state \((s_i \in S)\), and then produces an output item in addition to a new state. Let \(s_0\) be the initial state and \(a_i\) be the \(i^{th}\) item from an arbitrary ordering of collection \(a\). The semantics of the
generic invocation of the kernel, with value \( s_i \) for the state, can then be defined as follows, for \( i \geq 1 \):
\[
y_i = f(a_{i-1}, s_{i-1})
\]
\[
s_i = \pi_2(y_i)
\]
The semantics of the stateful functional, where \( \Pi_1 \) is the left projection over a whole collection, is then the following:
\[
\text{mapWithState}(f)(a) = \Pi_1 \left( \bigcup_i \{y_i\} \right)
\]
The above semantics is clearly non-deterministic since it depends on the ordering choice. A similar formulation also holds for partitioned states, but in that case the binary kernel takes as input a subset of the state (i.e., the portion bound with the respective key); equivalently, it produces an update for the same subset of the state.
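A direct reading of this semantics, for one particular ordering of \(a\), can be written in plain Java; running the same kernel under two orderings exhibits exactly the order-dependence just mentioned. The names are illustrative, not the Spark API:

```java
import java.util.*;
import java.util.function.BiFunction;

// Direct reading of the mapWithState semantics: the kernel
// f : T x S -> (U, S) is threaded through the items in some ordering.
public class MapWithState {
    static <T, S, U> List<U> mapWithState(
            BiFunction<T, S, Map.Entry<U, S>> f, S s0, List<T> a) {
        List<U> out = new ArrayList<>();
        S s = s0;
        for (T item : a) {                 // an arbitrary ordering of a
            Map.Entry<U, S> y = f.apply(item, s);
            out.add(y.getKey());           // left projection Pi_1
            s = y.getValue();              // state update s_i = pi_2(y_i)
        }
        return out;
    }

    public static void main(String[] args) {
        // kernel: running sum; emits the new sum and keeps it as state
        BiFunction<Integer, Integer, Map.Entry<Integer, Integer>> f =
                (x, s) -> Map.entry(x + s, x + s);
        List<Integer> r1 = mapWithState(f, 0, List.of(1, 2, 3));
        List<Integer> r2 = mapWithState(f, 0, List.of(3, 2, 1));
        if (!r1.equals(List.of(1, 3, 6))) throw new AssertionError();
        if (r1.equals(r2)) throw new AssertionError(); // order-dependent outputs
        System.out.println(r1 + " vs " + r2);
    }
}
```

The two orderings produce different output multisets when projected as sequences, which is the non-determinism discussed above.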
Moreover, windowing is not a proper extension since windows can be stored within each actor’s local state [8]. However, the considered frameworks treat windowing as a primitive concept. This can be easily mapped to the Dataflow domain by just considering tokens of proper types.
Finally, iterations can be modeled by inserting loops in semantic dataflows. In this case, each actor involved in an iteration is activated each time a new token is available and the termination condition is not met. This implementation of iterative computations is similar to the hierarchical actors of Lee & Parks [12], used to encapsulate subgraphs modeling iterative algorithms.
### 5. Parallel Execution Dataflow
The Parallel Execution Dataflow level represents parallel implementations of semantic dataflows. As in the previous section, we first introduce the approach and then describe how the various frameworks instantiate it and what consequences this brings to their runtimes.
The most straightforward source of parallelism comes directly from the Dataflow model, namely, independent actors can run in parallel. Furthermore, some actors can be replicated to increase parallelism by making replicas work over a partition of the input data—that is, by exploiting full data parallelism. This is the case, for instance, of the map operator described in Section 3.1. Both the above schemas are referred to as embarrassingly parallel processing, since there are no dependencies among actors.
Note that introducing data parallelism requires partitioning input tokens into sub-tokens, distributing those to the various worker replicas, and then aggregating the resulting sub-tokens into an appropriate result token—much like scatter/gather operations in message passing programs. Finally, in case of dependent actors that are activated multiple times, parallelism can still be exploited by letting tokens “flow” as soon as each activation is completed. This well-known schema is referred to as stream/pipeline parallelism.
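The scatter/compute/gather schema can be sketched with a thread pool standing in for the worker replicas. This is illustrative code only, not any framework's implementation:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch of scatter/compute/gather: the input token is partitioned into
// sub-tokens, each replica maps its partition, and the sub-results are
// aggregated (in order) into one result token.
public class ScatterGather {
    static <T, U> List<U> parallelMap(List<T> data, Function<T, U> f, int replicas)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(replicas);
        try {
            int chunk = (data.size() + replicas - 1) / replicas;
            List<Future<List<U>>> parts = new ArrayList<>();
            for (int i = 0; i < data.size(); i += chunk) {            // scatter
                List<T> sub = data.subList(i, Math.min(i + chunk, data.size()));
                parts.add(pool.submit(() ->
                        sub.stream().map(f).collect(Collectors.toList())));
            }
            List<U> out = new ArrayList<>();
            for (Future<List<U>> p : parts) out.addAll(p.get());      // gather
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Integer> out = parallelMap(List.of(1, 2, 3, 4, 5, 6, 7, 8), x -> x * x, 4);
        if (!out.equals(List.of(1, 4, 9, 16, 25, 36, 49, 64))) throw new AssertionError();
        System.out.println(out);
    }
}
```

Gathering in submission order preserves the input ordering, mirroring how a result token is reassembled from sub-tokens.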
Figure 7 shows a parallel execution dataflow for the MapReduce semantic dataflow from Fig. 5. In this example, the dataset $A$ is divided into 8 independent partitions and the map function $m$ is executed by 8 actor replicas; the reduce phase is then executed in parallel by actors enabled by the incoming tokens (namely, the results) from their “producer” actors.
Spark identifies its parallel execution dataflow by a DAG such as the one shown in Fig. 8(a), which is the input of the DAG Scheduler entity. This graph illustrates two main aspects: first, the fact that many parallel instances of actors are created for each function and, second, the actors are grouped into Stages that are executed in parallel if and only if there is no dependency among them. Stages can be considered as the hierarchical actors in [12]. Grouping actors in stages brings another consequence, derived from the Spark runtime implementation: each stage that depends on some previous stages has to wait for their completion before execution. The depicted behavior is analogous to the one encountered in the Bulk
Synchronous Parallelism paradigm (BSP) [14]. In a BSP algorithm, as well as in a Spark application, a computation proceeds in a series of global supersteps consisting of: 1) Concurrent computation, in which each actor executes its business code on its own partition of data; 2) Communication, where actors exchange data between themselves if necessary (the shuffle phase); 3) Barrier synchronization, where actors wait until all other actors have reached the same barrier.
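The barrier behavior at a superstep boundary can be sketched with a CyclicBarrier: no worker proceeds past the end of a superstep until all workers have reached it. Illustrative code only:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of one BSP stage boundary: all workers finish superstep 1
// (local computation) and synchronize before superstep 2 begins,
// as at a Spark stage/shuffle boundary.
public class BspBarrier {
    public static void main(String[] args) throws Exception {
        final int workers = 4;
        AtomicInteger doneInStep1 = new AtomicInteger();
        CyclicBarrier barrier = new CyclicBarrier(workers);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                doneInStep1.incrementAndGet();   // superstep 1: local computation
                try {
                    barrier.await();             // barrier synchronization
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
                // superstep 2 starts only here: all workers finished step 1
                return null;
            });
        }
        pool.shutdown();
        if (!pool.awaitTermination(10, TimeUnit.SECONDS)) throw new AssertionError();
        if (doneInStep1.get() != workers) throw new AssertionError();
        System.out.println("all workers synchronized: " + doneInStep1.get());
    }
}
```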
Flink transforms a JobGraph (e.g., Fig. 6(b)) into an ExecutionGraph [6] (e.g., Fig. 8(b)), in which the JobVertex (a hierarchical actor) is an abstract vertex containing ExecutionVertexes (actors), one per parallel sub-task. A key difference compared to the Spark execution graph is that a dependency does not represent a barrier among actors or hierarchical actors: instead, tokens are effectively pipelined, and thus actors can be fired concurrently. This is a natural implementation for stream processing, but since the runtime is the same, it applies to batch processing applications as well. Conversely, iterative processing is implemented according to the BSP approach: one evaluation of the step function on all parallel instances forms a superstep (again a hierarchical actor), which is also the granularity of synchronization; all parallel tasks of an iteration need to complete the superstep before the next one is initiated, thus behaving like a barrier between iterations.
TensorFlow replicates actors implementing certain operators (e.g., tensor multiplication) on tensors (input tokens). Hence, each actor is a data-parallel actor operating on intra-task independent input elements—here, multi-dimensional arrays (tensors). Moreover, iterative actors/hierarchical actors (in case of cycles on a subgraph) are implemented with tags similar to the MIT Tagged-Token dataflow machine [4], where the iteration state is identified by a tag and independent iterations are executed in parallel. It is interesting to note that TensorFlow differs from Flink in the execution of iterative actors: in TensorFlow an input can enter a loop iteration whenever it becomes available, while Flink imposes a barrier after each iteration.
Storm creates an environment for the execution dataflow similar to the other frameworks. Each actor is replicated to increase the inter-actor parallelism and each group of replicas is identified by the name of the Bolt/Spout of the semantics dataflow they originally belong to, thus instantiating a hierarchical actor. Each of these actors (actors group) represents data parallel tasks without dependencies. Since Storm is a stream processing framework, pipeline parallelism is exploited. Hence, while an actor is processing a token (tuple), an upstream actor can process the next token concurrently, increasing both data parallelism within each actors group and task parallelism among groups.
Summarizing, in Sections 4 and 5, we showed how the considered frameworks can be compared through the lens of the very same model from both a semantic and a parallel implementation perspective. The comparison is summarized in Table 1 and Table 2 for batch and streaming processing, respectively.
Table 1. Batch processing.
<table>
<thead>
<tr>
<th></th>
<th>Spark</th>
<th>Flink</th>
<th>TensorFlow</th>
</tr>
</thead>
<tbody>
<tr>
<td>Graph specification</td>
<td>Implicit, OO-style chaining of transformations</td>
<td>Implicit, OO-style chaining of transformations</td>
<td>Implicit, Prefix operator with arguments</td>
</tr>
<tr>
<td>DAG</td>
<td>Join operation</td>
<td>Join operation</td>
<td>N-ary operators and/or results</td>
</tr>
<tr>
<td>Tokens</td>
<td>RDD</td>
<td>DataSet</td>
<td>Tensor</td>
</tr>
<tr>
<td>Nodes</td>
<td>Transformations from RDD to RDD</td>
<td>Transformations from DataSet to DataSet</td>
<td>Transformations from Tensor to Tensor</td>
</tr>
<tr>
<td>Parallelism</td>
<td>Data parallelism in transformations + Inter-actor task parallelism, limited by per-stage BSP</td>
<td>Data parallelism in transformations + Inter-actor task parallelism</td>
<td>Data parallelism in transformations + Inter-actor task parallelism + Loop parallelism</td>
</tr>
<tr>
<td>Iteration</td>
<td>Using repetitive sequential executions of the graph</td>
<td>Using iterate &amp; iterateDelta</td>
<td>Using control flow constructs</td>
</tr>
</tbody>
</table>
Table 2. Stream processing.
<table>
<thead>
<tr>
<th></th>
<th>Spark</th>
<th>Flink</th>
<th>Storm</th>
</tr>
</thead>
<tbody>
<tr>
<td>Graph specification</td>
<td>Implicit, OO-style chaining of transformations</td>
<td>Implicit, OO-style chaining of transformations</td>
<td>Explicit, Connections between bolts</td>
</tr>
<tr>
<td>DAG</td>
<td>Join operation</td>
<td>Join operation</td>
<td>Multiple incoming/outgoing connections</td>
</tr>
<tr>
<td>Tokens</td>
<td>DStream</td>
<td>DataStream</td>
<td>Tuple (fine-grain)</td>
</tr>
<tr>
<td>Nodes</td>
<td>Transformations from DStream to DStream</td>
<td>Transformations from DataStream to DataStream</td>
<td>Stateful with “arbitrary” emission of output tuples</td>
</tr>
<tr>
<td>Parallelism</td>
<td>Analogous to Spark Batch parallelism</td>
<td>Analogous to Flink Batch parallelism + Stream parallelism between stream items</td>
<td>Data parallelism between different bolt instances + Stream parallelism between stream items by bolts</td>
</tr>
</tbody>
</table>
### 6. Dataflow Process Network
The Process Network layer shows how the program is effectively executed, following the process and scheduling-based categorization described earlier (Sect. 2.1).
#### 6.1. Scheduling-based Execution
In Spark, Flink and Storm, the resulting process network dataflow follows the Master-Workers pattern, where actors from previous layers are transformed into
tasks. Fig. 9(a) shows a representation of the Spark Master-Workers runtime. We will use this structure also to examine Storm and Flink, since the pattern is similar for them: they differ only in how tasks are distributed among workers and how the inter/intra-communication between actors is managed.

Fig. 9. Master-Workers structure of the Spark runtime (a) and Worker hierarchy example in Storm (b).
**The Master** has total control over program execution, job scheduling, communications, failure management, resource allocations, etc. The master is the entity that knows the semantic dataflow representing the current application, while workers are completely agnostic about the whole dataflow: they only obtain tasks to execute, which represent actors of the execution dataflow the master is running. It is only when the execution is effectively launched that the semantic dataflow is built and possibly optimized to obtain the best execution plan (Flink). With this postponed evaluation, the master creates what we called the parallel execution dataflow to be executed. In Storm and Flink, the data distribution is managed in a decentralized manner, i.e., it is delegated to each executor, since they use pipelined data transfers and forward tokens as soon as they are produced. In Spark streaming, the master is responsible for data distribution: it discretizes the stream into micro-batches that are buffered into workers’ memory. The master generally keeps track of distributed tasks, decides when to schedule the next tasks, reacts to finished vs. failed tasks, keeps track of the semantic dataflow progress, and orchestrates collective communications and data exchange among workers. This last aspect is crucial when executing **shuffle operations**, which entail data exchanges among executors. Since workers do not have any information about each other, to exchange data they have to request information from the master and, moreover, signal that they are ready to send/receive data.
**Workers** are nodes executing the actor logic, namely, a worker node is a process in the cluster. Within a worker, a certain number of parallel executors is instantiated, that execute tasks related to the given application. Workers have no information
about the dataflow at any level since they are scheduled by the master. Although the frameworks use different nomenclatures, in Spark, Storm and Flink cluster nodes follow the same decomposition into **Workers**, **Executors** and **Tasks**. A Worker is a process in a node of the cluster, e.g., a Spark worker instance. A node may host multiple Worker instances. An Executor is a thread that is spawned in a Worker process and it executes Tasks, which are the actual kernel of an actor of the dataflow. Fig. 9(b) illustrates this structure in Storm, an example that would also be valid for Spark and Flink.
#### 6.2. Process-based Execution
In TensorFlow, actors are effectively mapped to threads and possibly distributed on different nodes. The cardinality of the semantic dataflow is preserved, as each actor node is instantiated into one node, and the allocation is decided using a placement algorithm based on a cost model. The dataflow is distributed on cluster nodes and each node/Worker may host one or more dataflow actors/Tasks, that internally implement data parallelism with a pool of threads/Executors working on Tensors. Communication among actors is done using the send/receive paradigm, allowing workers to manage their own data movement or to receive data without involving the master node, thus decentralizing the logic and the execution of the application.
### 7. Limitations of the Dataflow Model
Reasoning about programs using the Dataflow model is attractive since it makes the program semantics independent from the underlying execution model. In particular, it abstracts away any form of parallelism due to its pure functional nature. The most relevant consequence, as discussed in many theoretical works about Kahn Process Networks and similar models—such as Dataflow—is the fact that all computations are deterministic.
Conversely, many parallel runtime systems exploit nondeterministic behaviors to provide efficient implementations. For example, consider the Master-Workers pattern discussed in Section 6. A naive implementation of the Master node distributes tasks to $N$ Workers according to a round-robin policy—task $i$ goes to worker $i \mod N$—which leads to a deterministic process. An alternative policy, generally referred to as *on-demand*, distributes tasks by considering the load level of each worker, for example, to implement a form of load balancing. The resulting processes are clearly nondeterministic, since the mapping from tasks to workers depends on the relative service times.
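The two policies can be contrasted in a few lines of illustrative code; the load vector below is a stand-in for whatever load metric a real master would observe at runtime:

```java
// Sketch of the two task-distribution policies. Round-robin is a pure
// function of the task index, hence deterministic; on-demand depends on
// observed worker load, so the task-to-worker mapping varies across runs.
public class Scheduling {
    static int roundRobin(int task, int workers) {
        return task % workers; // task i goes to worker i mod N
    }

    static int onDemand(int[] load) {
        int best = 0; // pick the currently least-loaded worker
        for (int w = 1; w < load.length; w++)
            if (load[w] < load[best]) best = w;
        return best;
    }

    public static void main(String[] args) {
        if (roundRobin(7, 3) != 1) throw new AssertionError();
        // the same task lands on different workers under different loads
        if (onDemand(new int[]{5, 0, 9}) != 1) throw new AssertionError();
        if (onDemand(new int[]{0, 5, 9}) != 0) throw new AssertionError();
        System.out.println("ok");
    }
}
```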
Non-determinism can be encountered at all levels of our layered model in Fig. 1. For example, actors in Storm’s topologies consume tokens from incoming streams according to a from-any policy—process a token from any non-empty input channel—thus no assumption can be made about the order in which stream tokens are processed. More generally, the semantics of stateful streaming programs depends on the order in which stream items are processed, which is not specified by the semantics of the semantic dataflow actors in Section 4. As a consequence, this prevents reasoning in purely Dataflow—i.e., functional—terms about programs in which actor nodes include arbitrary code in some imperative language (e.g., shared variables).
### 8. Conclusion
In this paper, we showed how the Dataflow model can be used to describe Big Data analytics tools, from the lowest level—process execution model—to the highest one—semantic Dataflow. The Dataflow model is expressive enough to represent computations in terms of batch, micro-batch and stream processing. With this abstraction, we showed that Big Data analytics tools have similar expressiveness at all levels and we proceeded with the description of a layered model capturing different levels of Big Data applications, from the program semantics to the execution model. We also provided an overview of some well-known tools—Spark, Flink, Storm and TensorFlow—by analyzing their semantics and mapping them to the proposed Dataflow-based layered model. With this work, we aim at giving users a general model to understand the levels underlying all the analyzed tools.
The need to exploit parallel computing at a high enough level of abstraction certainly predates the advent (or the “hype”) of Big Data processing. In the parallel computing and software engineering communities, this need has been advocated years before by way of algorithmic skeletons [7] and design patterns [10], which share many of the principles underlying the high-level frameworks considered in previous sections. Conceptually, the tools we discussed through the paper exploit Data Parallelism, Stream Parallelism, or both.
Data Parallel patterns express computations in which the same kernel function is applied to all items of a data collection, which include for instance Map and Reduce. They can be viewed as higher-order functions and can be placed at the very top of our layered model from Fig. 1, since they expose a declarative data processing model (Section 3.1).
Stream Parallel patterns express computations in which data streams flow through a network of processing units. This is another key parallelism exploitation pattern, found from the first high-level approaches to parallel computing, such as the P3L language [5], to more recent frameworks, such as FastFlow [3]. This model, enriched with Control-Parallel patterns such as If and While, allows programs to be expressed through arbitrary graphs, where vertices are processing units and edges are network links. In this setting, Stream Parallel patterns represent pre-built, nestable graphs, therefore they expose a topological data processing model (Section 3.2).
As future work, we plan to implement a model of Big Data analytics tools based on algorithmic skeletons, on top of the FastFlow library [3], exploiting both forms of parallelism.
**Acknowledgements** This work was partly supported by the EU-funded project TOREADOR (contract no. H2020-688797), the EU-funded project Rephrase (contract no. H2020-644235), and the 2015–2016 IBM Ph.D. Scholarship program. We gratefully acknowledge Prof. Domenico Talia for his comments on the early version of the manuscript.
Models, tasks, RT operating systems and schedulability
Marco Di Natale
Associate Professor, Scuola S. Anna - Italy, UTRC Visiting Fellow
The V-shape development cycle (V-model)
- User Requirements
- Functional Specifications
- Functional modeling
- Architecture Exploration
- Component modeling
- Behavior modeling
- Coding
- Validation
- System verification
- Integration testing
- Module testing
A development cycle
On August 19, 1418, a competition was announced in Florence, where the city’s magnificent new cathedral, Santa Maria del Fiore, had been under construction for more than a century.
*Whoever desires to make any model or design for the vaulting of the main Dome of the Cathedral under construction by the Opera del Duomo—for armature, scaffolding or other thing, or any lifting device pertaining to the construction and perfection of said cupola or vault shall do so before the end of the month of September. If the model be used he shall be entitled to a payment of 200 gold Florins.*
Competition between architects was an old and honored custom. Patrons had been making architects compete against one another for their commissions since at least 448 B.C., when the Council of Athens held a public competition for the war memorial it planned to build on the Acropolis. Under these circumstances, it was normal practice for architects to produce models as a means of convincing patrons or panels of judges of the virtues of their particular designs.
Engineering has made use of models since its very early days.
Filippo Brunelleschi’s design for the dome of the cathedral of Santa Maria del Fiore in Florence remains one of the most towering achievements of Renaissance architecture. Completed in 1436, the dome remains a remarkable feat of design and engineering. Its span of more than 140 feet exceeds St Paul's in London and St Peter's in Rome, and even outdoes the Capitol in Washington, D.C., making it the largest dome ever constructed using bricks and mortar. When work on the dome began in 1420 Brunelleschi was virtually unknown. Sixteen years later the dome was built, and its architect was a superstar.
Model-based design flow
- Typical flow, updated in V-shape or iterative fashion or V-shape plus iterative ….
- The four tenets on the right are fundamental to model-based design
- Of course, you must select a modeling language that lets you do everything in the most natural and easy way …
My perspective …
- I am from the schedulability analysis/time analysis community (almost an outsider… more on this later)
- Most of us are Operating Systems people, some with programming languages (Ada) background
- Our world consists of program functions, called in the context of threads (tasks) executed under the control of an OS
- Not surprisingly, close to the AUTOSAR model
- A possible system model is:
Given a set of tasks $T = \{\tau_1, \tau_2, \ldots \tau_n\}$ each characterized by a model $\tau_i = \{C_i, T_i, D_i, p_i, E_i\}$, where $C_i$ is the worst-case execution time, $T_i$ its period, $D_i$ its deadline, $p_i$ its priority, and $E_i$ the CPU on which it is allocated for execution …
Unfortunately, tasks are hardly the starting point
Where are the tasks?
- When we asked the industry why they did not apply our (worst-case) time analysis the common response was: “we have functional (correctness) problems and maintenance problems well before deadline problems”
- However, tasks are definitely there …
- Should the designer “see” them and control their creation? How?
- Should they be the product of synthesis and optimization tools? What tools?
- And the same should be said for a complex (CPS) execution platform
Schedulability (real-time) analysis
- Predictability typically means that it is possible to compute the *worst-case response time* of a task without excessive pessimism.
- Other communities have different goals/objectives/definitions when it comes to time constraints (the previous one is quite weak when modeling controls or safety-critical functions that cannot tolerate jitter).
- In the end the risk is to have separate communities, each of us with our hammer looking at a world consisting of (our type of) nails:
- Assuming our models this is what we can deliver …
- How many models are needed to capture a modern complex CPS?
- How good/realistic/capable of dealing with the required complexity are our models?
What happened to our timing analysis?
(from the real-time community)
- We have been fairly successful in developing runtime algorithms for resource management that made it into operating systems and communication protocols and improved their predictability (as in the previous definition)
- Examples:
- Priority Inheritance (Mars Pathfinder)
- The automotive OSEK standard
- Influenced several other standards (CAN bus)
What happened to our timing analysis?
- But really, almost nobody *really* tries to predict the worst-case response time using our formulas
- Not as much as you would expect
- Little use of design time analysis (until now) despite possible needs and several tools
- *Maybe the task model is not the right starting point …*
- *Maybe we needed a better integration between the analysis and mainstream design/modeling methodologies/languages*
- Starting from the late 90’s there has been an attempt to bring the concepts of schedulability analysis into UML
- A neighboring domain … UML originated from the Object-oriented programming language/SW modeling communities
- Possibly one branch of software engineering
• **Design (continued):** matching the logical design into the SW architecture design
**RTS and Platform-Based Design**
- **Task and resource model**
- **RTOS API**
- **Timing attributes (from platform deployment)**
- **Timing constraints (from functional model)**
Models and implementation: Simulink
Where are the tasks?
Models and implementation: UML
This Class Diagram is an early, pre-task design view of class relationships, based on the Object design interaction models. The diagram would be considerably enhanced as further implementation detail was added.
Where are the tasks?
Models and implementation: UML
Models and implementation: FSM
NOTE: stutter = \{(absent, absent, absent)\}
Model-based design: a functional view
• Advantages of model-based design
– Advance verification of correctness of (control) algorithms
• Possible approaches
1. *The model is developed considering the implementation and the platform limitations*
– include from the start considerations about the implementation (tasking model and HW)
• PROS (apparent)
– use knowledge about the platform to steer the design towards a feasible solution (in reality, this is often a trial-and-error manual process)
• CONS (true)
– the model depends on the platform (updates/changes on the platform create opportunities or more often issues that need to be solved by changing the model)
– Analysis is more difficult, absence of layers makes isolating errors and causes of errors more difficult
– the process is rarely guided by sound theory (how good is the platform selection and mapping solution?)
– Added elements (Rate-transition blocks) introduce delays
2. The model is developed as a “pure functional” model according to a formally defined semantics, irrespective of the possible implementation
– The model is then refined and matched to a possible implementation platform. Analysis tools check feasibility of an implementation that refines the functional semantics and suggest options when no implementation is feasible (more …)
Model-based design: a functional view
- Advantages of model-based design starting from a purely functional model
- Possibility of advance verification of correctness of (control) algorithms
- Irrespective of implementation
- This allows an easier retargeting of the function to a different platform if and when needed
- The functional design does not depend on the platform
- The verification of the functional design can be performed by domain experts (control engineers) without knowledge of SW or HW implementation issues
- Necessary assets to leverage these advantages …
- Capability of defining rules for the correct refinement of a functional model into an implementation model on a given platform
- Capability of supporting design iterations to understand the tradeoffs and the changes that are required when a given functional model cannot be refined (mapped) on a given platform
Model-based development flow
- Platform-based design
![Diagram of model-based development flow]
- **Functional model**: Independent of Platform
- **System platform model**: (possibly the level of the SW implementation in tasks and messages) Independent from both and suitable for evaluation of mapping solutions
- **Execution architecture model**: Independent of Functionality
Platform-dependent modeling: an example
This model demonstrates how to simulate and generate code using the example SetAlarm and ActivateTask blocks for the OSEK real-time operating system. This model contains three function-call subsystems, "Red", "Green", and "Blue" that are generated with RTW-ICE as separate OSEK Tasks and thus execute based on assigned priority using the OSEK scheduler. A generic OSEK main program and OIL file are generated by the ERT File customization template: osek_file_process.tlc. You can modify this template to provide detailed information for your specific OSEK implementation.
PBD and RTOS/platform
Refinement into a set of concurrent tasks exchanging messages
SR modeling (Simulink)
Platform API (OSEK/AUTOSAR)
Application instance
Platform instance
Dist. system w. asynchronous network (CAN)
Dist. system w. time-triggered network (FlexRay)
Single-processor w. priority-based RTOS
Functional representation: SR Simulink modeling
- Functional model: “zero-logical-time” execution or (no notion of platform or computation time)
- The output update and state update functions are computed immediately at the time the block is triggered/activated
- “the system response or reaction is guaranteed to be completed before the next system event”.
- The only significant references to time are the sampling times (or trigger events) of blocks
- Also, the partial order in the execution of blocks because of feedthrough behavior must be considered.
Semantics options
• Signals are persistent (Simulink)
• Signals are not persistent
• Algebraic loops (causal loops without delays) result in a fixed point and lack of compositionality
Semantics and Compositionality
• Semantics problem: systems compositions do not behave according to the semantics of the components
– The problem is typical of SR semantics when there are causal cycles: existence of a fixed point solution cannot be guaranteed (i.e. the system may be ill-defined)
– When multirate blocks are in a causal loop the composition is never feasible
Functional representation: SR Simulink modeling
- Simulink system = networks of blocks
\[ S = \{b_1, b_2, \ldots, b_n\} \]
- Blocks can be Regular or Stateflow blocks
- Regular blocks can be Continuous or Discrete type.
- All types operate on (right)continuous type signals.
- Blocks may have a state \( S_j \) or may be stateless.
Functional representation: SR Simulink modeling
- Continuous-type blocks are defined by a set of differential equations
- Discrete-type blocks are activated at events $e_j$ belonging to a periodic sequence with 0 offset and period $T_j$
- When a model generates code, continuous blocks must be implemented by a fixed-step solver, with period $T_b$
- $T_b$ (base period) must be a divisor of any other $T_j$ in the system
Notation: $i_j$ denotes an input of block $b_j$, $\bar{i}_j$ its input vector, $o_{j,p}$ its $p$-th output, and $\bar{o}_j$ its output vector.
At each $e_j$ the block computes its output update and state update functions, updating the values on its output signals:
$$S_j^{\text{new}}, \overline{o}_j = f(S_j, \overline{i}_j)$$
Stateflow (or state machine) blocks react to a set of events $e_{j,v}$, derived from signals (generated at each rising or falling edge).
As such, events belong to a set of discrete time bases $kT_{jv}$.
Simulink models (execution order - feedthrough)
Stateflow machines are extended (synchronous) FSMs with hierarchical and parallel states
Simulink models (execution order - feedthrough)
Transition notation
And quite a few issues … (transition actions can generate events)
For more info:
N. Scaife, C. Sofronis, P. Caspi, S. Tripakis, and F. Maraninchi *Defining and translating a "safe" subset of Simulink/Stateflow into Lustre*. 4th ACM International Conference on Embedded Software (EMSOFT04), Pisa, Italy, September 2004
Simulink models (execution order - feedthrough)
Most blocks are of type feedthrough or Mealy-type (output does depend on input). This implies a precedence constraint in the computation of the block output functions.
Simulink models (not feedthrough)
Integrator (output does not depend on input but only on state)
Simulink models (SR)
Example of generated code
```c
/* Model step function */
void Subsystem_step(void)
{
  /* Output: '<Root>/Out1' incorporates:
   *   Discrete Integrator: '<S1>/Discrete-Time integrator'
   */
  Subsystem_Y.Out1 = Subsystem_DWork.DiscreteTimeIntegrator_DSTATE;

  /* Update for Discrete Integrator: '<S1>/Discrete-Time integrator' */
  Subsystem_DWork.DiscreteTimeIntegrator_DSTATE =
    Subsystem_P.DiscreteTimeIntegrator_gai * Subsystem_U.In1 +
    Subsystem_DWork.DiscreteTimeIntegrator_DSTATE;
}
```
Simulink models (execution order - feedthrough)
If two blocks $b_i$ and $b_j$ are in an input-output relationship (one of the outputs of $b_i$ is the input of $b_j$) and $b_j$ is of type feedthrough, then
$$b_i \rightarrow b_j$$
In case $b_j$ is not of type feedthrough, then the link has a delay,
$$b_i \rightarrow b_j^{-1}$$
Simulink models (execution order - feedthrough)
Let \( b_i(k) \) represent the \( k \)-th occurrence of \( b_i \) (belonging to the set \( \bigcup_v kT_{i,v} \) if a state machine block, or \( kT_i \) if a standard block), a sequence of activation times \( a_i(k) \) is associated to \( b_i \).
Given \( t \geq 0 \), \( n_i(t) \) is the number of times \( b_i \) is activated before or at \( t \).
In case \( b_i \rightarrow b_j \), if \( i_j(k) \) is the input of the \( k \)-th occurrence of \( b_j \), then this input is equal to the output of the last occurrence of \( b_i \) that is no later than the \( k \)-th occurrence of \( b_j \)
\[
i_j(k) = o_i(m); \text{ where } m = n_i(a_j(k))
\]
If \textit{the link has a delay} , then the previous output value is read,
\[
i_j(k) = o_i(m - 1);
\]
Simulink models (execution order - feedthrough)
May be a problem in a code implementation with (scheduling) delays
Simulink models: rates and deadlines
the system response or reaction must be completed before the next system event
Each blockset is characterized by an execution rate
The result is a network of functions (output/state update) with a set of partial orders
Outline
• Functional vs. Execution model
• Semantics options
• Preserving semantics in refinements
– Verifying that the synchronous reaction assumption holds with respect to the actual (finite) computation times
– The behavior of the simulation (of the functional model –i.e. without RT blocks-) must be the same as the run-time behavior
• Communication behavior must be the same
• Outputs are produced before the following event (i.e. The system is not sensitive to whatever happens in between events)
• Tradeoffs in task implementations
– Multitask Model implementation by Real-Time Workshop and rate transition (RT) blocks
– Scheduling trade-offs (schedulability vs. added delays)
• References
Simulation of models
• Simulation of Multirate models
– order all blocks based upon their topological dependencies
– The RTW tool (meant for a single processor implementation) generates a total order based on the partial order imposed by the feedthrough semantics
– *In reality, there are many such total orders that satisfy the dependencies!*
• Other choices are possible
• *In multiprocessor implementations this can be leveraged to optimize the implementation*
– Then, for simulation, virtual time is initialized at zero
– The simulator scans the precedence list in order and execute all the blocks for which the value of the virtual time is an integer multiple of the period of their inputs
– Simulated execution means computing the block output and then computing the new state
From Models to implementation
• Simulink case
elist
<table>
<tbody>
<tr><th>Purpose</th><td>List simulation methods in the order in which they are executed during a simulation</td></tr>
<tr><th>Syntax</th><td><code>elist m:mid [tid:TID]</code><br><code>elist &lt;gcs | s:mid&gt; [mth] [tid:TID]</code><br><code>elist &lt;gcb | sid:bid&gt; [mth] [tid:TID]</code></td></tr>
<tr><th>Description</th><td><code>elist m:mid</code> lists the methods invoked by the system or nonvirtual subsystem method corresponding to the method id <code>mid</code> (see the <code>where</code> command for information on method IDs), e.g.,</td></tr>
</tbody>
</table>
```
(sldebug @19): elist m:19
RootSystem.Outputs 'vdp' [tid=0] :
0:0 Integrator.Outputs 'x1' [tid=0]
0:1 Outport.Outputs 'Out1' [tid=0]
0:2 Integrator.Outputs 'x2' [tid=0]
...
```
The listing columns are: Block Id, Method, Block, Task Id.
Simulink models
Simulation of multirate models
- Simulation of multirate models: an example
- Simulation runs in virtual time. The virtual clock is updated at each step
Motivation: Model-based devel. issues
- The implementation of an SR model should preserve its semantics so as to retain the validation and verification results. The implementation can use
- Single task executing at the base rate of the system
- A set of concurrent tasks, with typically one task for each execution rate, and possibly more.
Simulation: logical execution and communication time
The implementation of a Simulink model must define
- A set of tasks and a block-to-task mapping model $M(b_i, \tau_j, k)$
- block $b_i$ is executed in the context of task $\tau_j$ in the $k$-th position
- A scheduling policy (priority assignment)
- Communication mechanisms
In such a way that
- The partial order of execution is preserved
- Communication occurs according to the SR semantics
- Blocks execution rates are preserved
And (possibly)
- The use of memory is minimized
- Extensibility is maximized
Schedulability analysis
- The block update functions are annotated with their worst-case execution times: $\gamma_i$ for block $b_i$
- The set of tasks and the block-to-task mapping model $M(b_i, \tau_j, k)$ are defined
- The task execution time is
$$C_j = \sum_{M(b_i, \tau_j, k)} \gamma_i$$
- The block response time can be computed as
$$R_{i,j} = \gamma_i + \sum_{M(b_m, \tau_j, p) \land M(b_i, \tau_j, k) \land p<k} \gamma_m + \sum_{q \in hp(j)} \left\lceil \frac{R_{i,j}}{T_q} \right\rceil C_q$$
Task model
- A task can execute a block if its rate is an integer divider of the block rate (rate constraints)
Q: what is the best block-to-task mapping?
**Pro**: No need to protect communication between E and F.
**Cons**: Less scheduling flexibility, limited priority inversion.
Simulink models: rates and deadlines
The system response or reaction must be completed before the next system event – if we abstract from the model and only look at the task code model, the analysis can be very pessimistic.
The generated code runs in the context of a 1 ms task, but reactions do not occur every 1 ms.
Simple periodic model: Worst-case exec time of a reaction to any event.
Simulink models: rates and deadlines
the system response or reaction must be completed before the next system event – if we abstract from the model and only look at the task code model, the analysis can be very pessimistic.
The generated code runs in the context of a 1 ms task, but reactions do not occur every 1ms.
Multiframe model: worst-case exec time of a reaction to each type of event.
Simulink models: rates and deadlines
The system response or reaction must be completed before the next system event – if we abstract from the model and only look at the task code model, the analysis can be very pessimistic.
The generated code runs in the context of a 1 ms task, but reactions do not occur every 1 ms.
Even a multiframe model should account for state dependencies in the evaluation of the worst case execution time of each frame – need to build the right demand bound function.
From Models to implementation
• **Simulink case (single task implementation)**
<table>
<thead>
<tr>
<th>Mode</th>
<th>Single-Rate</th>
<th>Multi-Rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>SingleTasking</td>
<td>Allowed</td>
<td>Allowed</td>
</tr>
<tr>
<td>MultiTasking</td>
<td>Disallowed</td>
<td>Allowed</td>
</tr>
<tr>
<td>Auto</td>
<td>Allowed (defaults to SingleTasking)</td>
<td>Allowed (defaults to MultiTasking)</td>
</tr>
</tbody>
</table>
From Models to implementation
- Simulink case (single task implementation)
Implementation of models
- Implementation runs in real-time (code implementing the blocks behavior has finite execution time)
- Generation of code: Singletask implementation
From Models to implementation
- Simulink case (single task implementation)
```c
rt_OneStep()
{
Check for interrupt overflow or other error
Enable "rt_OneStep" (timer) interrupt
ModelStep-- Time step combines output, logging, update
}
```
Single-rate `rt_OneStep` is designed to execute model step within a single clock period. To enforce this timing constraint, `rt_OneStep` maintains and checks a timer overrun flag.
Generation of code: multitask mode
- The RTW code generator assigns each block a task identifier (tid) based on its sample rate.
- The blocks with the fastest sample rates are executed by the task with the highest priority, the next slowest blocks are executed by a task with the next lower priority, and so on (Rate Monotonic)
Model implementation: single task
Easy but possibly inefficient
System base cycle = time to execute the longest system reaction
Model implementation: multi-task
Real-time execution: finite execution time and possible preemption
Inconsistent data
Model implementation: multi-task
Real-time execution: lack of time determinism (because of preemption)
Behavior different from simulation
From Models to implementation
- Multitask implementation
```c
rt_OneStep()
{
Check for base-rate interrupt overflow
Enable "rt_OneStep" interrupt
Determine which rates need to run this time step
ModelStep(tid=0) --base-rate time step
For i=1:NumTasks -- iterate over sub-rate tasks
Check for sub-rate interrupt overflow
If (sub-rate task i is scheduled)
ModelStep(tid=i) --sub-rate time step
EndIf
EndFor
}
```
Nondeterminism in time and value
• However, this can lead to the violation of the zero-execution time semantics of the model (without delays) and even to inconsistent state of the communication buffer in the case of
– low rate (priority) blocks driving high rate (priority) blocks.
– high rate (priority) blocks driving low rate (priority) blocks.
Adding determinism: RT blocks
• Solution: Rate Transition blocks
– added buffer space and added latency/delay
– relax the scheduling problem by allowing to drop the feedthrough precedence constraint
• The mechanism can only be implemented if the rates of the blocks are harmonic (one multiple of the other)
– Otherwise, it is possible to make a transition to the gcd of the blocks’ periods, at the price of additional space and delay
RT blocks: High rate/priority to low rate/priority
**COST**
- space: 1 additional set of variables for each link
- time: overhead of RT implement.
- performance: none
<table>
<thead>
<tr>
<th>pri</th>
<th>T</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
Consistency here is guaranteed by proving there is no preemption
Output update only
RT blocks: Low rate/priority to high rate/priority
COST
space: 2 additional set of variables for each link
time: overhead of RT implement.
performance: 1-unit delay (low rate period)
Consistency here is guaranteed by proving there is no preemption
Output update
State update
Protected RT
Output update
Limitations in the use of RT blocks (1)
Illegal rate transition found between 'untitled1/Counter Free-Running/Output' and 'untitled1/Unit Delay1'. Sample time 2s of 'untitled1/Counter Free-Running/Output' and sample time 3s of 'untitled1/Unit Delay1' must be integer multiples, but are currently not. You can resolve this by using a rate transition block whose parameter 'Ensure deterministic data transfer' is unchecked.
Tradeoffs and design cycles
• RT blocks are **not** a functional entity
– *but an implementation device*
• RT Blocks are only required
– because of the selection of the RM scheduling policy
in slow to fast transitions
– because of the possibility of preemption
in both cases
• In both cases, time determinism (of communication) is obtained at
the price of additional memory
• In the case of slow to fast transitions, the RT block also adds a
delay equal to the period of the slowest block
– This is only because of the Rate monotonic scheduling
– Added delays decrease the performance of controls
Consistency issues
- **Consistency issues in the 1-1 communication between blocks with different rates may happen:**
- When blocks are executed in concurrent tasks (activated at different rates or by asynchronous events)
- When a reader may preempt a writer while updating the communication variables (reader with higher priority than writer)
- When the writer can preempt the reader while it is reading the communication variables (writer with higher priority).
- *Necessary condition for data inconsistency is the possibility of preemption reader→writer or writer→reader*
- Also, we may want to enforce time determinism (flow preservation)
Consistency issues
- Also, a relaxed form of time determinism may be required
- Input coherency: when inputs are coming from multiple blocks, we want to read inputs produced by instances activated by the same event
Guaranteeing data consistency
- Demonstrate impossibility of preemption between readers and writers
- Appropriate scheduling of blocks into tasks, priority assignment, activation offsets and using worst-case response time analysis
- Avoid preemption between readers and writers
- Disabling preemption among tasks (blocks) (RES_SCHEDULER in OSEK)
- Allow preemption and protect communication variables
- Protect all the critical sections by
- Disabling interrupts
- Using (immediate) priority ceiling (semaphores/OSEK resources)
- Problem: need to protect each use of a communication variable. Advantage (does not require extra buffer memory, but only the additional memory of the protection mechanism)
- Lock-free/Wait-free communication: multiple buffers with protected copy instructions:
- Typically w. interrupt disabling or kernel-level code
- Problem: requires additional buffer memory (How much?). Advantage: it is possible to cluster the write/read operations at the end/beginning of a task, with limited change to existing code.
- The best policy may be a mix of all the previous, depending on the timing constraints of the application and on the communication configuration.
Demonstrating impossibility of preemption
- Assign priorities and offsets and use timing analysis to guarantee absence of preemption
- Input data:
- Mapping of functional blocks into tasks
- Order of functional blocks inside tasks
- Worst-case execution time of blocks (tasks)
- Priorities assigned to tasks
- Task periods
- (relative) Offset in the activation of periodic tasks ($o_{wr} =$ minimum offset between writer and reader activations, $O_{wr}$ maximum offset between the activations)
- Computed data
- Worst-case response time of tasks/blocks (considering interferences and preemptions): $R_w$ for the writer, $R_r$ for the reader
- Two cases:
- Priority writer > priority reader
- Priority reader > priority writer
Absence of preemption/High to low priority
- Condition for avoiding preemption writer → reader (no assumptions about relative rates of reader/writer)
\[ R_r \leq T_w - O_{wr} \]
Absence of preemption/Low to high priority
- Condition guaranteeing absence of preemption of the writer by the reader (reader $\rightarrow$ writer)
\[ o_{wr} \geq R_w \]
\[ O_{wr} = o_{wr} = 0 \land R_w \leq T_r \]
Both conditions are unlikely in practice
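Both sufficient conditions can be encoded directly; a sketch using the slides' symbols ($R$ = worst-case response time, $T$ = period, $o_{wr}$/$O_{wr}$ = min/max writer-reader activation offset; function names are ours):

```python
# Sketch: sufficient conditions under which writer/reader preemption
# is impossible, so communication variables need no protection.

def no_preemption_writer_hp(R_r, T_w, O_wr):
    """Writer has higher priority: the reader must finish before the
    next writer activation, i.e. R_r <= T_w - O_wr."""
    return R_r <= T_w - O_wr

def no_preemption_reader_hp(R_w, T_r, o_wr, O_wr):
    """Reader has higher priority: either the reader is only activated
    after the writer has finished (o_wr >= R_w), or both are activated
    together and the writer finishes within the reader period."""
    return o_wr >= R_w or (O_wr == o_wr == 0 and R_w <= T_r)
```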
Absence of preemption/Low to high priority
- These conditions are ultimately used by the Rate Transition block mechanisms!!
\[
O_{wr} = \begin{cases}
0 & \text{if } w \text{ and } r \text{ are updated simultaneously} \\
0 & \text{if } w \text{ is updated before } r \\
R_w & \text{if } w \text{ is updated after } r
\end{cases}
\]
Avoiding preemption
- Disabling preemption
The response time of the high priority block/task is affected, need to check real-time properties.
Fixed-priority scheduling
- Examples of mappings and tradeoffs
<table>
<thead>
<tr>
<th>Block</th>
<th>$\gamma_i$</th>
<th>Block</th>
<th>$\gamma_i$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$F_1$</td>
<td>0.05</td>
<td>$F_7$</td>
<td>0.15</td>
</tr>
<tr>
<td>$F_2$</td>
<td>0.1</td>
<td>$F_8$</td>
<td>0.15</td>
</tr>
<tr>
<td>$F_3$</td>
<td>0.05</td>
<td>$F_9$</td>
<td>0.1</td>
</tr>
<tr>
<td>$F_4$</td>
<td>0.075</td>
<td>$F_{10}$</td>
<td>0.15</td>
</tr>
<tr>
<td>$F_5$</td>
<td>0.1</td>
<td>$F_{11}$</td>
<td>0.1</td>
</tr>
<tr>
<td>$F_6$</td>
<td>0.1</td>
<td>$F_{12}$</td>
<td>0.075</td>
</tr>
</tbody>
</table>
Fixed-priority scheduling
• Another example
<table>
<thead>
<tr>
<th>Block</th>
<th>$\gamma_i$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$F_1$</td>
<td>0.05</td>
</tr>
<tr>
<td>$F_2$</td>
<td>0.1</td>
</tr>
<tr>
<td>$F_3$</td>
<td>0.05</td>
</tr>
<tr>
<td>$F_4$</td>
<td>0.075</td>
</tr>
<tr>
<td>$F_5$</td>
<td>0.1</td>
</tr>
<tr>
<td>$F_6$</td>
<td>0.1</td>
</tr>
<tr>
<td>$F_7$</td>
<td>0.15</td>
</tr>
<tr>
<td>$F_8$</td>
<td>0.15</td>
</tr>
<tr>
<td>$F_9$</td>
<td>0.1</td>
</tr>
<tr>
<td>$F_{10}$</td>
<td>0.15</td>
</tr>
<tr>
<td>$F_{11}$</td>
<td>0.1</td>
</tr>
<tr>
<td>$F_{12}$</td>
<td>0.075</td>
</tr>
</tbody>
</table>
- $\tau_1$: $T = 1$, block order $[y_1, y_2]$
- $\tau_2$: $T = 2$, block order $[y_9, y_{10}]$
- $\tau_3$: $T = 2$, block order $[y_7, y_3, y_8]$
- $\tau_4$: $T = 1$, block order $[y_4, y_5]$
- $\tau_5$: $T = 1$, block order $[y_{11}, y_{12}]$
- $\tau_6$: $T = 2$, block order $[y_6]$
Design/Scheduling trade-offs
However ...
- if the communication is fast-to-slow and the slow block completes before the next instance of the fast writer, the RT block is not required.
- if the communication is from slow to fast, it is possible to selectively preserve the precedence order (giving higher priority to the slow block) at the expense of schedulability.
- Two tasks at the same rate, one high priority, the other low priority.
An approach
**Required steps**
- Definition of the network of functional blocks with feedthrough dependencies
- Definition of the synchronous sets
- Priority assignment and mapping into tasks
- Definition of the block order inside tasks
![Diagram showing network of functional blocks]
- **Type1 RT**
- **Type2 RT**
Preserving streams
- What buffering mechanisms are needed for the general case?
- Event-driven activation
- One-to-many communication
Preserving streams
- What buffering mechanisms are needed for the general case?
- Stream preservation (requirement)
- Event-driven activation
- One to many communication
---
The value produced by this instance is read by this instance. ... and needs to be buffered in between.
This block instance is assigned a buffer entry at the time of its activation.
The entry is written at running time.
This reader instance is assigned the buffer entry at the time of its activation.
The entry is used by the reader at running time.
Preserving streams
- The time the buffer index is assigned (activation of the block) may differ significantly from the time when the index is actually used (at running time) because of scheduling delays
- Support from the OS is needed for assigning indexes at block activation times
Preserving streams
• Many issues
– Defining efficient mechanisms for assigning indexes to the writers and the readers (if they are executed at kernel level)
– Sizing the communication buffers (given the system characteristics, how many buffers are needed?)
It is not necessary to store all these (6) values, there are at most 3 readers at each time!
Model implementation: multi-task
- Efficient but issues with data integrity and time determinism
Q1: How many buffers do you need?
Q2: How do you define the index to be used (at activation time) and pass it to the runtime instance?
Buffer sizing methods
Two main methods
• preventing concurrent accesses by computing an upper bound for the maximum number of buffers that can be used at any given time by reader tasks. This number depends on the maximum number of reader instances that can be active at any time.
• Temporal concurrency control. The size of the buffer can be computed by upper bounding the number of times the writer can produce new values, while a given data item is considered valid by at least one reader.
Bounding the *maximum number of reader instances*
- the size is equal to the maximum number \( N \) of reader task instances that can be active at any time (the number of reader tasks if \( d \leq T \)), plus two more buffers: one for the latest written data and one for use by the writer [Chen97] (no additional information is available, and no delays on the links).
A linked list implementation may trade space for time (O(1) access)
Algorithm 1: Modified Chen’s Protocol for SR flow preservation - Writer part

```
Data: BUFFER[1..NB]          -- NB: number of buffers
Data: READINGLP[1..n_lp]     -- n_lp: number of lower-priority readers
Data: READINGHP[1..n_hp]     -- n_hp: number of higher-priority readers
Data: PREVIOUS, LATEST

GetBuf():
begin
    bool InUse[1..NB]
    for i = 1 to NB do InUse[i] = false
    InUse[LATEST] = true
    for i = 1 to n_lp do
        j = READINGLP[i]
        if j != 0 then InUse[j] = true
    end
    for i = 1 to n_hp do
        j = READINGHP[i]
        if j != 0 then InUse[j] = true
    end
    i = 1
    while InUse[i] do ++i
    return i
end

Writer_activation():
begin
    integer widx, i
    widx = GetBuf()
    PREVIOUS = LATEST
    LATEST = widx
    for i = 1 to n_hp do CAS(READINGHP[i], 0, PREVIOUS)
    for i = 1 to n_lp do CAS(READINGLP[i], 0, LATEST)
end

Writer_runtime():
begin
    write data into BUFFER[widx]
end
```
Algorithm 2: Modified Chen’s Protocol for SR flow preservation - Readers

```
ReaderLP_activation():
begin
    constant id      -- each lower-priority reader has its unique id
    integer ridx
    READINGLP[id] = 0
    ridx = LATEST
    CAS(READINGLP[id], 0, ridx)
    ridx = READINGLP[id]
end

ReaderHP_activation():
begin
    constant id      -- each higher-priority reader has its unique id
    integer ridx
    READINGHP[id] = 0
    ridx = PREVIOUS
    CAS(READINGHP[id], 0, ridx)
    ridx = READINGHP[id]
end

Reader_runtime():
begin
    read data from BUFFER[ridx]
end
```
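A plain-Python emulation of the two listings may help to see the mechanism at work. This is a single-threaded sketch: the CAS is emulated, `None` plays the role of the `0` sentinel, and a real implementation would run the activation parts atomically at kernel level. The class name `ChenBuffer` is ours.

```python
# Single-threaded sketch of the modified Chen's protocol for SR flow
# preservation: LP readers get the LATEST written value, HP readers
# (which run before the writer in the same instant) get the PREVIOUS one.

class ChenBuffer:
    def __init__(self, n_lp, n_hp):
        # One buffer per possibly-active reader instance, plus one for
        # the latest written datum and one free for the writer [Chen97].
        self.nb = n_lp + n_hp + 2
        self.buf = [None] * self.nb
        self.reading_lp = [None] * n_lp   # buffer index held per LP reader
        self.reading_hp = [None] * n_hp
        self.previous = 0
        self.latest = 0
        self.widx = 0

    def _cas(self, arr, i, expected, new):
        # Emulated compare-and-swap (atomic in a real implementation).
        if arr[i] == expected:
            arr[i] = new

    def _get_buf(self):
        in_use = {self.latest}
        in_use |= {j for j in self.reading_lp if j is not None}
        in_use |= {j for j in self.reading_hp if j is not None}
        return next(i for i in range(self.nb) if i not in in_use)

    def writer_activation(self):
        self.widx = self._get_buf()
        self.previous, self.latest = self.latest, self.widx
        for i in range(len(self.reading_hp)):
            self._cas(self.reading_hp, i, None, self.previous)
        for i in range(len(self.reading_lp)):
            self._cas(self.reading_lp, i, None, self.latest)

    def writer_runtime(self, value):
        self.buf[self.widx] = value

    def reader_lp_activation(self, rid):
        self.reading_lp[rid] = None
        self._cas(self.reading_lp, rid, None, self.latest)
        return self.reading_lp[rid]

    def reader_hp_activation(self, rid):
        self.reading_hp[rid] = None
        self._cas(self.reading_hp, rid, None, self.previous)
        return self.reading_hp[rid]

    def reader_runtime(self, ridx):
        return self.buf[ridx]
```

An activated reader instance keeps reading the value assigned at its activation even if the writer runs again in between, which is exactly the SR flow-preservation requirement.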
Temporal concurrency control
- Based on the concept of datum lifetime. The writer must not overwrite a buffer while the datum stored in it is still valid for some reader.
\[
l_{wr} = o_{wr} + \max(R_{ri})
\]
The writer simply writes at the next (modulo $N$) index. A buffer entry can be reused when no reader can still access it.
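Under these assumptions the sizing can be sketched as follows (symbols as in the slide; the extra slot for the entry currently being written is our reading of the circular-buffer scheme):

```python
# Sketch: buffer sizing under temporal concurrency control. The writer
# cycles through N slots; a slot may only be reused once no reader can
# still consider its datum valid, so N must cover all writer instances
# that can occur within one datum lifetime l_wr = o_wr + max(R_ri).
from math import ceil

def lifetime(o_wr, reader_response_times):
    """Datum lifetime seen by the writer."""
    return o_wr + max(reader_response_times)

def n_buffers(l_wr, T_w):
    """Writer instances that can start within one lifetime, plus the
    slot currently being written (our assumption)."""
    return ceil(l_wr / T_w) + 1
```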
Combination
- A combination of the temporal concurrency control and the bounded number of readers approaches can be used to obtain a tighter sizing of the buffer.
- Reader tasks are partitioned into two groups: fast and slow readers. The buffer bound for the fast readers leverages the lifetime-based bound of temporal concurrency control, and the size bound for the slow ones leverages information on the maximum number of reader instances that can be active at any time. Overall, the space requirements are reduced.
Combination
- Readers of \( \tau_{wi} \) are sorted by increasing lifetime (\( l_i \leq l_{i+1} \)). The bound
\[
NB_{w_i,j} = \left\lfloor \frac{l_j}{T_w} \right\rfloor
\]
- Applies to readers with lifetime \( \leq l_j \) (fast readers).
- Once \( j \) is chosen, the overall bound is
\[
NB_w = \min_j \left\{ \left\lfloor \frac{l_j}{T_w} \right\rfloor + \sum_{i=j+1}^{NR_w} \left\lceil \frac{R_{r_i}}{T_{r_i}} \right\rceil + \max_{i=j+1}^{NR_w} \text{delay}[i] \right\}
\]
where the first term is the buffer space shared among the fast readers and the remaining terms bound the number of slow-reader instances active inside the lifetime.
Modeling Distributed Real-time systems
- Where is the task model, the implementation relation and the deployment model?
Distributed implementation of models
Need to characterize the scheduling delays (how? cosimulation?)
Remote blocks no longer react at the same time
Heterogeneous Network topology
Architectures are heterogeneous systems
Delays from network
A very simple model with oversampling .... Imagine the data streams between source blocks and the multiplier/comparator are exchanged over a network. These are the results seen by the control engineer at design time.
Delays from network
An example of the trade-offs between additional functional delays and scheduling feasibility
Delays from network
Designers may be tempted to ease the scheduling problem by choosing the instance of the receiving task/block.
Delays from network
Unfortunately, by doing so, the behavior is different from the one simulated with 0-delay.
Are the designers/developers fully aware of these issues?
How can we help them?
(Task and message design and scheduling are in the background)
Delays from network
Unfortunately, solutions like this are possible
(not to mention issues with low-level communication layers/drivers and custom code)
Architecture optimization vs features
- Active and Passive Safety
by Leen and Heffernan – IEEE Computer
Active and Passive Safety
- **Passive safety** (reduced personal injury in event of an accident)
- **Active safety** (avoiding an accident)
**Key Systems and Technologies:**
- **ABS** (Antilock brake system)
- **ACC** (Adaptive cruise control)
- **BAS** (Brake assist system)
- **BiW** (Brake by wire)
- **CA** (Collision avoidance)
- **DbW** (Drive by wire)
- **EBD** (Electronic brakeforce distribution)
- **EMB** (Electromechanical brakes)
- **EMS** (Electromechanical steering)
- **ESP** (Electronic stability program)
- **ETC** (Electronic traction control)
- **SbW (wb)** (Steer by wire with mechanical backup)
**Timeline:**
- 1980: Safety cell
- 1990: ETC, BAS, ACC
- 2000: ESP, BiW, EBD
- 2020: Autonomous driving, Smart adaptive controls
**Technological Advancements:**
- Side impact protection
- Seat belt
- Automatic emergency cell
- Side air bag
- Underfloor concept
- Precrash action
- Road recognition (LDW)
- ACC (Distronic)
As with conventional cruise control, the driver specifies the desired velocity; ACC consistently maintains this desired speed.
In addition, the driver can enter the desired distance to a vehicle driving in front. If the vehicle now approaches a car travelling more slowly in the same lane, ACC will recognize the diminishing distance and reduce the speed through intervention in the motor management and by braking with a maximum of 0.2 to 0.3 g until the preselected distance is reached. If the lane is clear again, ACC will accelerate to the previously selected desired speed.
## Evolution of Integrated Functions
### Pre-2004
- Stabilitrak 2
- Onstar emergency notification
- Speed-dependent volume
### to 2004
- ACC
### to 2010/12
- function6
- function5
### to 2012/14
- function13
- function12
- function11
- function10
### Post-2014
- function17
- function16
- function15
- function14
### Subsystem
- Brake
- HVAC
- Body
- Steering
- Suspension
- Object detection
- Environm. sensing
- Infotainment
- Occ. protection
- Exterior lighting
- Occupant information
- Engine
- Transmis.
- Telematics
Automotive architecture trends
• An increasing number of functions will be distributed on a decreasing number of ECUs and enabled through an increasing number of smart sensors and actuators
• today: > 5 buses and > 30 ECUs
• 90% of innovation in cars for the foreseeable future will be enabled through the Electronic Vehicle Architecture
• Transition from single-ECU Black-box based development processes to a system-level engineering process
• System-level methodologies for quantitative exploration and selection,
• From Hardware Emulation to Model Based Verification of the System
• Architectures need to be defined years ahead of production time, with incomplete information about (future) features
• Multiple non-functional requirements can be defined
Functional model
Input interface
$f_1 \xrightarrow{S_1} f_2 \xrightarrow{S_2} f_3 \xrightarrow{S_3} f_4 \xrightarrow{S_4} f_5 \xrightarrow{S_5} f_6$
Output interface
- Signal: period, is_trigger, precedence, jitter constraint, deadline
- Function: period, activation mode, input interface, output interface
Architecture model
Functional model
Execution architect. model
ECU
clk speed (Mhz)
register width
OSEK
bus speed (b/s)
Deployment model
Functional model
System platform model
Execution architect. model
Task
- period
- priority
- WCET
- activ. mode
Resource
- WCBT
Message
- CANId
- period
- length
- transm. mode
- is_trigger
Deployment model
Deployment: An example
End-to-end latencies
ECU and bus utilizations
Back to architecture synthesis
- **DAC 07 (GP)**
- Periods
- Activation modes
- **DATE 07 (MILP)**
- RTAS 07 (B&B)
- **RTSS 07 (MILP)**
- Extensibility
- RTAS 08 (MILP+search)
- Simulated annealing
- System Architecture
- Number and type of ECUs and buses
- System topology
- Function to ECU allocation
- Function to task mapping
- Flow To Implementation
- Task and message priorities
- System Functionality
- Mapping
- Performance Analysis
- Refinement
Approach: Mathematical Programming
• Why Mathematical Programming?
• (compared with search, genetic programming or SA …)
– Simplicity
• Problem represented with:
– Set of decision variables
– Constraints
– Objective function
• “automatically” handles cross dependency among selection choices
– Easier coding of multi-objective optimization
– Standardized approach
• Well established technique
• Sound theory, methods
• Availability of commercial solvers (in essence, search engines)
– How good is your solution?
• Provides safe estimate of optimal solution
• Provides intermediate solutions of increasing quality
• Challenge:
– Capture the problem and obtain efficient runtimes
(Example) Problem Formulation
Objective
Minimization of (average case) end-to-end latencies
Subject to
- Constraints on end-to-end latencies
- Constraints on messages size
- Constraints on utilization
- Constraints on message and task deadlines
- Semantics preservation constraints
Design objectives (optimization variables)
- Placement of tasks onto the CPUs
- Packing of signals to messages
- Assignment of priorities to tasks and messages
- Definition of activation modes/synchronization model
- Period optimization
**Periodic Activation Model**
High latency, but allows decoupling the scheduling problem
**End-to-end latency analysis**
**Periodic asynchronous activation model**
\[ l_{(i,j)} = \sum_{k: \sigma_k \in P(i,j)} (T_k + r_k) \]
where (approx.)
\[ r_i = C_i + \sum_{j \in hp(i)} \left\lceil \frac{r_i}{T_j} \right\rceil C_j \]
Worst Case Response Times
Tasks: \[ r_i = c_i + \sum_{j \in \mathcal{hp}(i)} \left\lceil \frac{r_i}{t_j} \right\rceil c_j \quad \forall o_i \in \mathcal{T} \]
Messages: \[ r_i = c_i + b_i + \sum_{j \in \mathcal{hp}(i)} \left\lceil \frac{r_i - c_i}{t_j} \right\rceil c_j \quad \forall o_i \in \mathcal{M} \]
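The recurrences above are solved by fixed-point iteration. A sketch for the task case, together with the end-to-end latency sum of the periodic activation model (function names are ours; assumes utilization below 1 so the iteration converges):

```python
# Sketch: worst-case response times by fixed-point iteration, and the
# end-to-end latency of the periodic (asynchronous) activation model.
from math import ceil

def task_response_times(tasks):
    """tasks: list of (C, T) pairs, index 0 = highest priority.
    Iterates r = C + sum over higher-priority tasks j of ceil(r/Tj)*Cj
    until the value stabilizes."""
    R = []
    for i, (C, T) in enumerate(tasks):
        r, prev = C, 0
        while r != prev:
            prev = r
            r = C + sum(ceil(prev / Tj) * Cj for Cj, Tj in tasks[:i])
        R.append(r)
    return R

def path_latency(path):
    """Periodic model: each hop in the path costs one period plus one
    response time, l = sum(T_k + r_k). path: list of (T, r) pairs."""
    return sum(T + r for T, r in path)
```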
- Resource utilization
- Fraction of time the resource (ECU or bus) spends processing its objects (tasks or messages)
- Utilization bounds less than 100%
- To allow for future extensibility
\[ \left( \sum_{i: o_i \rightarrow R_j} \frac{c_i}{t_i} \right) \leq u_j \quad \forall R_j \in \mathcal{R} \]
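The utilization constraint is a one-line check per resource; a sketch (the 0.7 default bound and the names are ours):

```python
# Sketch: utilization of a resource (ECU or bus) and the extensibility
# headroom check, sum(C_i / T_i) <= u_j with u_j < 1.

def utilization(objs):
    """objs: list of (C, T) for the tasks/messages mapped to the resource."""
    return sum(C / T for C, T in objs)

def fits(objs, bound=0.7):
    """True if the resource stays within the utilization bound;
    a bound below 100% leaves headroom for future extensibility."""
    return utilization(objs) <= bound
```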
Event-based Activation Model
Lower latency for high priority paths, jitter increases along the path
End-to-end latency analysis
Data-driven precedence constrained activation model
\[ l_{(i,j)} = \sum_{k: o_k \in P(i,j)} w_k \quad \text{(approx.)} \]
\[ w_i = C_i + \sum_{j \in hp(i)} \left\lceil \frac{w_i + J_j}{T_j} \right\rceil C_j \]
Design Process and Requirements
- Design optimization
$X$: (discrete) space of design optimization variables, such as computation times, periods, placement, priorities …
**Constraints**
- Schedulability
- Communication
- Model Semantics preservation
- Extensibility (sensitivity)
**Metrics**
- Control related
Stochastic analysis
Figure 5. Latency cdfs of two high priority representative messages in the test set
Figure 6. Latency cdfs of two low priority representative messages in the test set
62 msg set (subset of chassis bus). Low priority msg – Distributions of latencies
Statistical analysis of CAN msgs
- Collected distributions of CAN message latencies by simulation on automotive buses (5 “realistic msgs configurations” and 20+ more obtained by derivation with changes in the load)
Statistical analysis of CAN msgs
- Can we fit the latency cdf with a “well-known” statistical distribution?
- What would be the accuracy?
Fitting with a gamma distribution
An exponential fitting also returns good results!
Statistical analysis of CAN msgs
• Finally, can we estimate the offsets and the parameters of the Gamma distribution \((a, b)\) or \((\mu, b)\) for each message by regression from parameters of the message set like \(U_{i,r}, U_{i,hr}, Q_i, Q_i^{hr}\)?
\[
\mu_{i,k} = (Q_{i,k} + \beta_5)e^{\beta_6} + \beta_7 U_{i,hr} + (Q_i^{hr} + \beta_8)U_{i,hr}e^{\beta_9} + \beta_{10} U_{i,hr}
\]
Conclusions
• Schedulability theory and worst-case timing analysis …
– From the run-time domain to the design domain (already happening)
– From the analysis domain to the optimization (synthesis) domain
– Complemented by sensitivity analysis and uncertainty evaluation
• However …
– Typical deadline analysis is not enough!
– Tasks and messages are not the starting point (semantics preservation issues from functional models to tasking models)
– Worst case analysis needs to be complemented
– Mixed domains (time-triggered / event-triggered)
Q&A
Thank you!
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
31ea927337f24ec566d5622c25a76043b2607f23
|
This work concerns a dissection of QNX: a proprietary, real-time operating system aimed at the embedded market. QNX is used in many sensitive and critical devices in different industry verticals and while some prior security research has discussed QNX, mainly as a byproduct of BlackBerry mobile research, there is no prior work on QNX exploit mitigations and secure random number generators. In this work, carried out as part of the master’s thesis of the first author, we present the first reverse-engineering and analysis of the exploit mitigations, secure random number generators and memory management internals of QNX versions up to and including QNX 6.6 and the brand new 64-bit QNX 7.0 released in March 2017. We uncover a variety of design issues and vulnerabilities which have significant implications for the exploitability of memory corruption vulnerabilities on QNX as well as the strength of its cryptographic ecosystem.
1 Introduction
QNX [17] is a proprietary, closed-source, Unix-like real-time operating system with POSIX support aimed primarily at the embedded market. Initially released in 1982 for the Intel 8088 and later acquired by BlackBerry, it forms the basis of BlackBerry OS, BlackBerry Tablet OS and BlackBerry 10 used in mobile devices as well as forming the basis of Cisco’s IOS-XR used in carrier-grade routers such as the CRS, the 12000 and the ASR9000 series. QNX also dominates the automotive market [61] (particularly telematics, infotainment and navigation systems) and is found in millions of cars from Audi, Toyota, BMW, Porsche, Honda and Ford to Jaguar and Lincoln. In addition, it is deployed in highly sensitive embedded systems such as industrial automation PLCs, medical devices, building management systems, railway safety equipment, Unmanned Aerial Vehicles (UAVs), anti-tank weapons guidance systems, the Harris Falcon III military radios, Caterpillar mining control systems, General Electric turbine control systems and Westinghouse and AECL nuclear powerplants.
The interest of high-profile actors in QNX-based systems is evidenced by a series of documents from the United States Central Intelligence Agency (CIA) obtained and released by WikiLeaks under the name ‘Vault 7’. These documents show an interest on part of the CIA’s Embedded Development Branch (EDB) of the Engineering Development Group (EDG) (which develops and tests exploits and malware used in covert operations) in targeting QNX [65].
In this work, we focus primarily on QNX’s ‘binary security’ i.e. its hardening against memory corruption exploitation, as well as the quality of its secure random number generators. More precisely, this work makes the following novel contributions:
- It presents the first reverse-engineering of the proprietary, closed-source QNX OS to document the internals of its memory manager, exploit mitigations (eg. NX memory, ASLR, stack canaries, RELRO) and secure random number generators (both the kernel PRNG and /dev/random), covering all QNX versions as of writing (ie. ≤ 6.6 and the newly released QNX 7.0).
- It presents the first analysis of the exploit mitigations and secure RNGs on QNX \( \leq 6.6 \) and 7.0 and uncovers a variety of design issues and vulnerabilities which have significant implications for the exploitability of memory corruption vulnerabilities on QNX as well as the strength of its cryptographic ecosystem.
- As a result of this work, we disclosed the uncovered issues to the vendor and cooperated in drafting patches to help protect system end-users.
Given that there is, as discussed in Section 2.1, no prior work on QNX’s mitigations, secure random number generators or memory management internals, we consider this work a significant contribution to the state of the art in understanding QNX security as well as QNX OS internals more broadly.
In Section 2 we present a brief overview of QNX’s OS architecture, its security architecture and its memory management internals. We discuss the result of our reverse-engineering and analysis of the exploit mitigations of QNX versions up to and including 6.6 in Section 3 and those of QNX version 7.0 in Section 4. In Section 5 we present the results of our reverse-engineering and analysis of the secure random number generators of QNX versions \( \leq 6.6 \) and 7.0. Finally, in Section 6 we present our concluding remarks.
2 QNX Overview
2.1 Security History
Most of the relatively scarce public research available on QNX security has been the byproduct of research into Blackberry’s QNX-based mobile operating systems such as TABLET OS, BLACKBERRY OS and BLACKBERRY 10 [3, 13–15, 66], most of which has not focussed on QNX itself. Recent work by Plaskett et al. [1, 42] has focussed on QNX itself, particularly security of the InterProcess Communication (IPC), message passing and Persistent Publish Subscribe (PPS) interfaces as well as kernel security through system call fuzzing. When it comes to specific vulnerabilities, the work done by Julio Cesar Fort [27] and Tim Brown [19] stands out in particular, and the MITRE CVE database [25] reports, as of writing, 34 vulnerabilities, most of which are setuid logic bugs or memory corruption vulnerabilities.
2.2 OS Architecture
QNX supports a wide range of CPU architectures and features a pre-emptible microkernel architecture with multi-core support ensuring virtually any component (even core OS components and drivers) can fail without bringing down the kernel. QNX itself has a small footprint but support is available for hundreds of POSIX utilities, common networking technologies (IPv4/IPv6, IPsec, FTP, HTTP, SSH, etc.) and dynamic libraries. As opposed to the monolithic kernel architecture of most general-purpose OSes, QNX features a microkernel which provides minimal services (eg. system call and interrupt handling, task scheduling, IPC message-passing, etc.) to the rest of the operating system which runs as a team of cooperating processes as illustrated in Figure 1. As a result, only the microkernel resides in kernelspace with the rest of the operating system and other typical kernel-level functionality (drivers, protocol stacks, etc.) residing in userspace next to regular user applications, albeit separated by privilege boundaries. In QNX the microkernel is combined with the process manager in a single executable module called PROCNTO. QNX libc converts POSIX function calls into message handling functions which pass messages through the microkernel to the relevant process. As of writing, the latest QNX release is version 7.0.
Figure 1: QNX Microkernel Architecture [60]
Most elements of the QNX system architecture (such as the messaging layer, process & resource management, filesystem and networking functionality, etc.) are well described in prior work [42] and since this work focuses on memory corruption we will only discuss QNX memory management and the security architecture in ‘broad strokes’.
2.3 Security Architecture
As a Unix-like operating system, QNX inherits a large part of the Unix security model, primarily in the form of user groups and associated permission-based access controls. QNX is certified to Common Criteria ISO/IEC 15408 Evaluation Assurance Level (EAL) 4+. The certification report [21] indicates that the Target of Evaluation (TOE) boundary encompasses only the PROCNTO system process (ie. the microkernel and process manager) and libc.
QNX features a strong separation between kernel- and userspace, running everything except for the microkernel and process manager in userspace; the microkernel and process manager are combined in the PROCNTO process, which runs as root with PID 1. Other OS components run as their own root processes in userspace next to non-OS processes. Separation between OS processes and non-OS processes comes down to a combination of enforcement of user permissions and additional sandboxing capabilities [54]. If a non-OS process is run as root, the only way to wall it off from the wider OS is by restricting its capabilities. On the other hand, capabilities can be assigned on a granular level allowing or disallowing access to system actions and resources, meaning many processes have no need to run as root to perform their functionality. Security separation between userspace and kernelspace is also mediated in this fashion, which does mean, however, that there is no ‘absolute isolation’ of the microkernel: a root user without significant capability restrictions (as is the default for most OS processes) can easily pivot into the microkernel by means of common kernel calls, access to sensitive devices (eg. /dev/mem) or installation of Interrupt Service Routines (ISRs).
As such one should not confuse the safety guarantee that the crashing of one component does not lead to a crash of the entire system with a security guarantee that the compromise of one component could not lead to the compromise of the entire system. If no explicit capability restrictions are put in place by system integrators, nothing prevents a compromise of a process with the right privileges or capabilities from leading to arbitrary kernelspace code execution.
### 2.4 Memory Management
QNX offers a full-protection memory model placing each process within its own private virtual memory by utilizing the MMU as shown in Figure 2. On QNX every process is created with at least one main thread (with its own, OS-supplied stack) and any subsequently created thread can either be given a custom-allocated stack by the program or a (default) system-allocated stack for that thread. QNX's virtual memory provides permission capabilities and the memory manager ensures inter-process memory access is mediated by privilege as well as capability checks [54]. QNX handles typical memory objects such as stacks, the heap, object memory (eg. video card memory mapped into userspace), shared libraries, etc. and has support for shared- and typed memory [57, 59]. The relevant memory manager internals are described in detail in Section 3.2.
For QNX versions up to and including 7.0, we illustrate QNX user- and kernel-space address boundaries, derived from reverse-engineering, in Tables 1 and 2. On QNX systems where ASLR is not enabled, libc is loaded by default at the addresses illustrated in Table 3. For QNX versions up to and including 6.6 on x86, the default user- and kernel-space layouts when ASLR is disabled are illustrated in Figures 3 and 4.
3 QNX ≤ 6.6 Exploit Mitigations
In this section we will present the results of our reverse-engineering and subsequent analysis of QNX's exploit mitigations and secure random number generator as they are implemented in QNX versions up to and including 6.6. QNX supports a variety of exploit mitigations as outlined in Table 4 and the compiler- and linker parts of these mitigations rely on the fact that the QNX Compile Command (QCC) uses GCC as its back-end [16]. On the operating system side of things, however, the mitigation implementations are heavily customized as we will see in this section.
We can also see from Table 4 that while basic mitigations (ESP, ASLR, SSP, RELRO) are supported, this is not the case for more modern ones (eg. CFI, Kernel Data & Code Isolation, etc.) which are becoming the norm in general purpose operating systems such as Windows or Linux. While some of these mitigations (eg. CFI, CPI, Vtable Protection) are mostly implemented in the compiler and several libraries, it is currently not clear to what degree they are (in)compatible with QNX's design.
We disclosed all discovered issues to the vendor and as a result fixes and improvements based on our suggestions were included in QNX 7.0 as documented in Section 4.
### 3.1 Executable Space Protection
Executable Space Protection (ESP), also referred to as Data Execution Prevention (DEP), NX memory or W⊕X memory, is a mitigation that seeks to prevent attackers from executing arbitrary injected payloads through a Harvard-style code and data memory separation on Von Neumann processors by rendering data memory non-executable and ensuring code memory is non-writable. ESP can be implemented by either relying on hardware support (eg. the x86 NX bit or ARM XN bit) or by means of software emulation. QNX has support for hardware-facilitated ESP on most of the architectures which support it since version 6.3.2 as shown in Table 5.
Insecure ESP Default Policy (CVE-2017-XXXX): While QNX supports ESP for several architectures, its
When developing exploits, attackers rely on knowledge of the target application’s memory map for directing write and read operations as well as crafting code-reuse payloads. Address Space Layout Randomization (ASLR) [31] is a technique which seeks to break this assumption by ensuring memory layout secrecy via randomization of addresses belonging to various memory objects (eg. code, stack, heap, etc.) and rendering them hard to guess.
QNX has ASLR support since version 6.5 (not supported for QNX Neutrino RTOS Safe Kernel 1.0) but it is disabled by default. QNX ASLR can be enabled on a system-wide basis by starting the PROCNTO microkernel with the -mR option [55] and disabled with the -M option. A QNX child process normally inherits its parent’s ASLR setting, but as of QNX 6.6 ASLR can also be enabled or disabled on a per-process basis by using the on utility [51] (with the -AE and -AD options respectively). Alternatively, one can use the SPAWN_ASLR_INVERT or POSIX_SPAWN_ASLR_INVERT flags with the spawn and posix_spawn process spawning calls. To determine whether or not a process is using ASLR, one can use the DCMD_PROC_INFO [49] command with the devctl [50] device control call and test for the _NTO_PF_ASLR bit in the flags member of the procfs_info structure.
As shown in Table 6, QNX ASLR randomizes the base addresses of userspace and kernelspace stack, heap and mmap’ed addresses as well as those of userspace shared objects (eg. loaded libraries) and the executable image (if the binary is compiled with PIE [16]). It does not, however, have so-called KASLR support in order to randomize the kernel image base address. The QNX Momentics Tool Suite development environment (as of version 5.0.1, SDP 6.6) does not have PIE enabled by default and indeed after an evaluation with a customized version of the checksec [34] utility we found that none of the system binaries (eg. those in /bin, /boot, /sbin directories) are PIE binaries in a default installation.
<table>
<thead>
<tr>
<th>Architecture</th>
<th>Support</th>
</tr>
</thead>
<tbody>
<tr>
<td>x86</td>
<td>✓ (requires PAE on IA-32e)</td>
</tr>
<tr>
<td>ARM</td>
<td>✓</td>
</tr>
<tr>
<td>MIPS</td>
<td>×</td>
</tr>
<tr>
<td>PPC 400</td>
<td>✓</td>
</tr>
<tr>
<td>PPC 600</td>
<td>✓</td>
</tr>
<tr>
<td>PPC 900</td>
<td>✓</td>
</tr>
</tbody>
</table>
### Table 5: QNX ≤ 6.6 Hardware ESP Support
<table>
<thead>
<tr>
<th>Memory Object</th>
<th>Randomized</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2"><b>Userspace</b></td>
</tr>
<tr>
<td>Stack</td>
<td>✓</td>
</tr>
<tr>
<td>Heap</td>
<td>✓</td>
</tr>
<tr>
<td>Executable Image</td>
<td>✓</td>
</tr>
<tr>
<td>Shared Objects</td>
<td>✓</td>
</tr>
<tr>
<td>mprotect</td>
<td>✓</td>
</tr>
<tr>
<td colspan="2"><b>Kernelspace</b></td>
</tr>
<tr>
<td>Stack</td>
<td>✓</td>
</tr>
<tr>
<td>Heap</td>
<td>✓</td>
</tr>
<tr>
<td>Kernel Image</td>
<td>×</td>
</tr>
<tr>
<td>mprotect</td>
<td>✓</td>
</tr>
</tbody>
</table>
### Table 6: QNX ≤ 6.6 ASLR Memory Object Randomization Support
We reverse-engineered QNX’s ASLR implementation (as illustrated in Figure 5) and found that it is ultimately implemented in two functions residing in the microkernel: stack_randomize and map_find_va (called as part of mprotect calls). QNX uses the Executable and Linking Format (ELF) binary format and processes are loaded from a filesystem using the `exec*`, `posix_spawnp` or `spawn` calls, which invoke the program loader implemented in the microkernel. If the ELF binary in question is compiled with PIE support, the program loader will randomize the program image base address as part of an `mmap` call. When a loaded program was linked against a shared object, or a shared object is requested for loading dynamically, the runtime linker (contained in `libc`) will load it into memory using a series of `mmap` calls. A stack is allocated automatically for the main thread (which involves an allocation of stack space using `mmap`) and has its base address (further) randomized by a call to `stack_randomize`. Whenever a new thread is spawned, a dedicated stack is either allocated (and managed) by the program itself or (by default) allocated and managed by the system in a similar fashion. Userspace and kernelspace heap memory allocation, done using functions such as `malloc`, `realloc` and `free`, ultimately relies on `mmap` as well. In the kernel, a dedicated stack is allocated for each processor using a call to `_salloc` and thus relies on `mmap`. As such, all ASLR randomization can be reduced to analysis of `stack_randomize` and `map_find_va`:
**Listing 1: QNX 6.6 vmm_mmap Routine**

```c
int vmm_mmap(PROCESS *prp, uintptr_t vaddr_requested,
             size_t size_requested, int prot, int flags,
             uint64_t boff, unsigned alignval, unsigned preload,
             int fd, void **vaddrp, size_t *sizep,
             part_id_t mpart_id)
{
    ...
    create_flags = flags;
    ...
    if ( prp->flags & _NTO_PF_ASLR )
        create_flags |= MAP_SPARE1;
    r = map_create(..., create_flags);
}
```
**Listing 2: QNX 6.6 map_create Routine**

```c
int map_create(struct map_set *ms, struct map_set *repl,
               struct mm_map_head *mh, uintptr_t va,
               uintptr_t size, uintptr_t mask, unsigned flags)
{
    ...
    if ( (flags & (MAP_FIXED|IMAP_GLOBAL)) == 0 ) {
        ...
        repl->first = NULL;
        va = map_find_va(mh, va, size, mask, flags);
        if ( va == VA_INVALID ) {
            r = ENOMEM;
            goto fail1;
        }
    }
}
```
**Listing 3: QNX 6.6 map_find_va Routine**

```c
uintptr_t map_find_va(struct mm_map_head *mh, uintptr_t va,
                      uintptr_t size, uintptr_t mask, unsigned flags)
{
    sz_val = size - 1;
    ...
    if ( flags & MAP_SPARE1 )
    {
        uint64_t clk_val = ClockCycles();
        /* ...continued below... */
```
Figure 5: QNX ≤ 6.6 ASLR Memory Object Graph
`map_find_va`: As shown in Listings 1, 2 and 3, the QNX memory manager's `vmm_mmap` handler function invokes `map_create` and passes a dedicated mapping flag (identified only as MAP_SPARE1 in older QNX documentation) if the ASLR process flag is set. `map_create` then invokes `map_find_va` with these same flags, which randomizes the found virtual address with a randomization value obtained from the lower 32 bits of the result of the `ClockCycles` function. This 32-bit randomization value is then bitwise left-shifted by 12 bits and bitwise AND-masked to 24 bits, resulting in a value with a mask form of 0x00FFF000, i.e. a randomization value with at most 12 bits of entropy.
**Listing 3 (continued)**

```c
        unsigned int rnd_val = ((_DWORD)clk_val << 12) & 0xFFFFFF;
        if ( flags & MAP_BELOW )
        {
            start_distance = start - best_start;
            if ( start != best_start )
            {
                if ( rnd_val > start_distance )
                    rnd_val %= start_distance;
                start -= rnd_val;
            }
        } else
        {
            end_distance = best_end - sz_val - start;
            if ( best_end - sz_val != start )
            {
                if ( rnd_val > end_distance )
                    rnd_val %= end_distance;
                start += rnd_val;
            }
        }
    }
    return start;
}
```
Weak ASLR Randomization (CVE-2017-3892):
As observed above, the randomization underlying \texttt{mmap} has a theoretical upper limit of 12 bits of entropy and the additional randomization applied to \textit{userspace} stacks introduces at most 7 bits of entropy, combining into at most 19 bits of entropy with a mask of form 0x00FFFFFF. Addresses with such low amounts of entropy can be easily bruteforced (especially locally) and while ASLR on 32-bit systems is generally considered inherently limited [7] one should remember that these are upper bounds, ie. they express the maximum possible introduced entropy. Given that these upper bounds already compare rather unfavorably against the measurements of actual ASLR entropy in eg. Linux 4.5.0, PaX 3.14.21 and ASLR-NG 4.5.0 as per [7], this does not bode well.
All QNX ASLR randomization draws upon \texttt{ClockCycles} as the sole source of entropy. The QNX \texttt{ClockCycles} [46] kernel call returns the current value of a free-running 64-bit cycle counter using a different implementation per architecture as outlined in Table 7. Even though QNX’s usage of \texttt{ClockCycles} seems to provide 32 bits of ‘randomness’, it is an ill-advised source of entropy due to its inherent regularity, non-secrecy, and predictability.
### Table 7: QNX ClockCycles Implementations
<table>
<thead>
<tr>
<th>Architecture</th>
<th>Implementation</th>
</tr>
</thead>
<tbody>
<tr>
<td>x86</td>
<td>RDTSC</td>
</tr>
<tr>
<td>ARM</td>
<td>Emulation</td>
</tr>
<tr>
<td>MIPS</td>
<td>Count Register</td>
</tr>
<tr>
<td>PPC</td>
<td>Time Base Facility</td>
</tr>
<tr>
<td>SuperH</td>
<td>Timer Unit (TMU)</td>
</tr>
</tbody>
</table>
\textbf{Listing 4: QNX 6.6 stack\_randomize Routine}
```c
uintptr_t stack_randomize(const THREAD *thp, uintptr_t new_sp)
{
    uintptr_t rnd_sp;
    size_t stack_size;
    ...
}
```
In order to demonstrate this, we evaluated the entropic quality of QNX ASLR randomized addresses of several userspace memory objects. We did this with a script starting 3000 ASLR-enabled PIE processes per boot session and running 10 boot sessions, collecting 30000 samples per memory object in total. We used the NIST SP800-90B [41] Entropy Source Testing (EST) tool [38] in order to evaluate the entropic quality of the address samples by means of a min entropy estimate, illustrated in Table 8. Min entropy is a conservative way of measuring the (un)predictability of a random variable \( X \) and expresses the number of (nearly) uniform bits contained in \( X \), with 256 bits of uniformly random data corresponding to 256 bits of min entropy.
### Table 8: QNX 6.6 ASLR Userspace Memory Object Min Entropy
<table>
<thead>
<tr>
<th>Memory Object</th>
<th>Min Entropy (8 bits per symbol)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Stack</td>
<td>1.59986</td>
</tr>
<tr>
<td>Heap</td>
<td>1.00914</td>
</tr>
<tr>
<td>Executable Image</td>
<td>0.956793</td>
</tr>
<tr>
<td>Shared Objects</td>
<td>0.905646</td>
</tr>
</tbody>
</table>
From Table 8 we can see that, on average, a QNX randomized userspace memory object has a min entropy of 1.11785975. This means that it has a little more than 1 bit of min entropy per 8 bits of data. If we extrapolate this to the full 32 bits of a given address this means that the stack, heap, executable image and shared object base addresses have min entropy values of 6.39944, 4.03656, 3.827172 and 3.622584 respectively, with an average of 4.471439 bits of min entropy. This compares very unfavorably with the entropy measurements for various Linux-oriented ASLR mechanisms in [7].
On QNX, as is the case with many operating systems, child processes inherit the memory layout of parent processes. As a result when attacking forked or pre-forked applications an attacker can guess an ASLR address, after which the target child crashes and is restarted with an identical memory layout allowing the attacker to make another guess and so on. This facilitates both brute-force attacks and malicious child processes attacking siblings in Android Zygote-style models [23]. Given this memory layout inheritance, the fact that QNX ASLR provides only limited entropy and has no active relocation (i.e. memory object locations are never re-randomized), QNX ASLR is highly susceptible to brute-force attacks.
Finally it should be noted that ClockCycles is not a secure random number generator and by drawing directly from its output the clock cycle counter value acts analogously to a random number generator’s internal state. Contrary to a secure random number generator’s internal state, however, the clock cycle counter value is not considered secret and in fact it leaks everywhere (both to local users as well as via network services). As a result, an attacker in possession of the current clock cycle counter value could reconstruct the clock cycle counter value (in a fashion analogous to the work in [36]) at the time of memory object randomization. Given the current clock cycle counter value and an estimate on memory object initialization times, an attacker can deduce the clock cycle counter value at randomization time for a given memory object and reconstruct it as:
\[
\text{clock}_t = \text{clock}_c - ((\text{time}_c - \text{time}_t) \times \text{cycles}_s)
\]
where \( \text{clock}_t \) and \( \text{clock}_c \) are the target and current clock cycle counter values, \( \text{time}_t \) and \( \text{time}_c \) are the target and current timestamp values, and \( \text{cycles}_s \) is the number of cycle increments per second.
**procfs Infoleak (CVE-2017-3892):** The proc filesystem (procfs) is a pseudo-filesystem on Unix-like operating systems that contains information about running processes and other system aspects via a hierarchical file-like structure. This exposure of process information often includes ASLR-sensitive information (e.g. memory layout maps, individual pointers, etc.) and as such has a history as a source for local ASLR infoleaks [12, 44, 62], with both GrSecurity and mainline Linux [26, 64] seeking to address procfs as an infoleak source. On QNX, procfs [48] is implemented by the process manager component of PROCNTO and provides the following elements for each running process:
- **cmdline**: Process command-line arguments.
- **exefile**: Executable path.
- **as**: The virtual address space of the target process mapped as a pseudo-file.
These procfs entries can be interacted with like files and subsequently manipulated using the devctl [50] API to operate on a file descriptor resulting from opening a procfs PID entry. Since process entries in QNX’s procfs are world-readable by default, this means a wide range of devctl-based information retrieval about any process is available to users regardless of privilege boundaries. For example, the QNX pidin [52] utility, which makes use of procfs to provide a wide range of process inspection and debugging options, easily allows any user to obtain stackframe backtraces, full memory mappings and program states for any process. This effectively constitutes a system-wide local information leak allowing attackers to circumvent ASLR. It should be noted this issue is not due to the availability of any particular utility (such as pidin) but rather results from a fundamental lack of privilege enforcement on...
### 3.3 Stack Smashing Protector
**Stack Smashing Protector (SSP) [43]** is a so-called _stack canary_ scheme which seeks to prevent the exploitation of stack buffer overflows by inserting a _secret_ and _unpredictably random_ canary value in between the local stack variables and the stackframe metadata (eg. saved return address, saved frame pointer). Any attempt at stack smashing which seeks to overwrite such metadata also ends up corrupting the canary value which is, upon function return, compared against the original master value so that when a mismatch is detected the SSP will invoke a failure handler.
QNX’s QCC implements the GCC SSP scheme [43] and supports all the usual SSP flags (strong, all, etc.). Since the compiler-side of the QNX SSP implementation is identical to the regular GCC implementation, the _master canary_ is stored accordingly and canary violation invokes the `__stack_chk_fail` handler.
#### Insecure User Canary Generation (CVE-2017-XXXX):
For _userspace applications_, this handler is implemented in QNX’s libc. On the OS-side, reverse-engineering of libc shows that the violation handler (shown in cleaned-up form in Listing 5) is a wrapper for a custom function called `__ssps_fail` which writes an alert message to the `/dev/tty` device and raises a SIGABRT signal. QNX generates its master canary value once upon program startup (during loading of libc) and it is not renewed at any time. Instead of the regular `libssp` function `__guard_setup`, QNX uses a custom function called `__init_cookies` (shown in Listing 6) invoked by the `_init_libc` routine in order to (among other things) generate the master canary value.
---
**Listing 5: QNX 6.6 Stack Canary Failure Handler (Userspace)**
```c
void __stack_chk_fail(void)
{
    int fd;

    if ((fd = open("/dev/tty", O_WRONLY)) != -1)
        write(fd, "\033[31m\033[40m*** stack smashing detected ***\033[0m",
              sizeof("\033[31m\033[40m*** stack smashing detected ***\033[0m") - 1);
    raise(SIGABRT);
}
```
**Listing 6: QNX 6.6 Userspace Canary Generation**
```c
void __init_cookies(void)
{
    void *stackval;  /* offset into the current stack */

    ts0 = (ClockCycles() & 0xffffffff);
    can0 = ts0 ^ ((((uintptr_t)stackval ^ (uintptr_t)__init_cookies) >> 8));
    _stack_chk_guard = can0;

    ts1 = (ClockCycles() & 0xffffffff);
    can1 = (((uintptr_t)stackval ^ can0) >> 8);
    _atexit_list_cookie = (can1 ^ ts1);

    ts2 = (ClockCycles() & 0xffffffff);
    _atqexit_list_cookie = (can1 ^ ts2);

    _stack_chk_guard &= 0xff00ffff;
}
```
performance purposes [28, 29]) rather than drawing it from a secure random number generator. The custom randomization routine draws upon three sources:
- **_init_cookies**: This is the function’s own address and as such, the only randomization it introduces is derived from ASLR’s effect on shared library base addresses which means that if ASLR is disabled (or circumvented) this is a static value.
- **stackval**: This is an offset to the current stack pointer and as such, the only randomization it introduces is derived from ASLR’s effect on the stack base which means that if ASLR is disabled (or circumvented) this is simply a static value.
- **ClockCycles**: The lower 32 bits of the result of a ClockCycles() call are used to construct the master canary value.
Since it includes a terminator-style NULL-byte, the QNX master canary value has a theoretical upper limit of 24 bits of entropy. However, all entropy in QNX stack canaries is ultimately based on invocations of the ClockCycles kernel call. If ASLR is enabled, the stackval address gets randomized when the main thread is spawned during program startup and the _init_cookies address gets randomized when libc gets loaded by the runtime linker. The ts0 value is generated when _init_cookies is called by _init_libc, which is invoked upon application startup (but after libc is loaded).
The first problem with using ClockCycles as a source of randomness is the limited entropy provided and thus the degree to which the canary is unpredictable and the size of the search space. In order to evaluate the entropic quality of QNX’s canary generation mechanism, we collected canary values for three different process configurations: No ASLR, ASLR but no PIE and ASLR with PIE. We used a script starting 785 instances of each configuration per boot session and repeated this for 10 boot sessions, collecting 7850 samples per configuration in total. We then used the NIST Entropy Source Testing (EST) tool [38] in order to obtain min entropy estimates for the sample sets as illustrated in Table 9. Based on these observations we can conclude that a) QNX canary entropy is far less than the hypothetical upper bound of 24 bits, being on average 7.78981332 bits for a 32-bit canary value and b) ASLR plays no significant contributing role to the overall QNX canary entropy.
Due to the absence of any canary renewal functionality [5], regular as well as byte-for-byte [67] brute-force attacks are feasible against QNX, especially considering the low entropic quality of the canaries.
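The practical difference canary renewal makes can be seen from a back-of-the-envelope guess count (our own illustration): guessing a 24-bit canary outright takes up to 2^24 attempts, while a byte-for-byte attack against a non-renewing target needs at most 256 attempts per byte:

```c
#include <stdint.h>

/* Worst-case guess counts for an n-byte canary (illustrative model). */
static uint64_t whole_value_guesses(unsigned n_bytes)
{
    return 1ULL << (8 * n_bytes);   /* guess the entire value at once */
}

static uint64_t byte_for_byte_guesses(unsigned n_bytes)
{
    return 256ULL * n_bytes;        /* each byte confirmed independently */
}
```

For the three unknown bytes of a QNX canary this is 16,777,216 versus at most 768 guesses, and the low measured entropy shrinks the realistic search space further.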
<table>
<thead>
<tr>
<th>Settings</th>
<th>Min Entropy (8 bits per symbol)</th>
</tr>
</thead>
<tbody>
<tr>
<td>No ASLR</td>
<td>1.94739</td>
</tr>
<tr>
<td>ASLR, no PIE</td>
<td>1.94756</td>
</tr>
<tr>
<td>ASLR + PIE</td>
<td>1.94741</td>
</tr>
</tbody>
</table>
Table 9: QNX 6.6 Stack Canary Min Entropy
On top of entropy issues, ClockCycles is not a secure random number generator, as we discussed before with ASLR, and as such similar reconstruction attacks could be mounted against QNX canaries.
**Absent Kernel Canary Generation (CVE-2017-XXXX):** When it comes to kernelspace stack canary protection, the QNX microkernel (in the form of the PROCNTO process) also features an SSP implementation covering a subset of its functions. Since the kernel neither loads nor is linked against libc (and canary violations need to be handled differently), SSP functionality is implemented in a custom fashion here. Reverse-engineering of the microkernel showed that it has a custom __stack_chk_fail handler (illustrated in Listing 7) but no master canary initialization routine. As a result, QNX never actually initializes the kernel master canary value and hence its value is completely static and known to the attacker (0x00000000), rendering QNX kernel stack canary protection trivial to bypass.
**Listing 7: QNX 6.6 Stack Canary Failure Handler (Kernelspace)**
```c
void __stack_chk_fail(void)
{
    kprintf("*** stack smashing detected in procnto ***");
__asm{ int 0x22 };
}
```
3.4 Relocation Read-Only
QNX supports Relocation Read-Only (RELRO), a mitigation that allows for marking relocation sections as read-only after they have been resolved, in order to prevent attackers from using memory corruption flaws to modify relocation-related information (such as PLT-related Global Offset Table (GOT) entries using GOT overwriting [20]). RELRO comes in two variants: partial and full, with the former protecting non-PLT entries and the latter protecting the entire GOT. RELRO is implemented partially in the toolchain and partially in the operating system.
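Why an unprotected GOT is attractive to attackers can be shown with a toy model (ours, not actual linker data structures): calls resolved through a writable function-pointer table can be redirected with a single write, which is precisely what full RELRO prevents by making the table read-only:

```c
/* Toy model of a GOT slot as a writable function pointer. */
static int add_one(int x) { return x + 1; }
static int hijacked(int x) { (void)x; return -1; }

static int (*got_entry)(int) = add_one;  /* stand-in for a resolved GOT slot */

static int call_through_got(int x) { return got_entry(x); }

/* Demonstrates the hijack: one pointer overwrite redirects all calls. */
static int demo(void)
{
    int before = call_through_got(41);  /* goes to add_one */
    got_entry = hijacked;               /* the "arbitrary write" */
    int after = call_through_got(41);   /* now goes to hijacked */
    return before == 42 && after == -1;
}
```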
In most RELRO implementations, the compiler first stores constants requiring dynamic relocation in a dedicated section (typically named .DATA.REL.RO), before the linker creates a PT_GNU_RELRO program header. A correct implementation further requires:
Dissecting QNX
- A linker which reorders relocation sections so that they are grouped together, properly aligned for memory permission marking, and precede the program data sections.
- A linker which emits a GNU_RELRO segment covering all relocation sections as well as (for full RELRO) a BIND_NOW flag.
- A dynamic linker which parses a binary for the GNU_RELRO segment and, upon encountering it, marks the contained sections as read-only after applying relocations, as well as immediately applying all relocations upon encountering a BIND_NOW flag.
- Privilege checks mediating any disabling of RELRO functionality.

We uncovered the following issues violating the above requirements:
**Broken RELRO (CVE-2017-3893):** The GNU_RELRO segment emitted by QNX’s QCC linker only covers relocation sections up until the .DATA section, but mistakenly the section order is not properly adjusted, so that internal data sections (e.g. .GOT) do not precede program data sections (e.g. .DATA). As a result, the most security-critical relocation entries (the PLT-related elements of the GOT) are not included in the GNU_RELRO segment and are thus not made read-only after relocation.
For example, on Debian Linux a full RELRO binary looks as pictured in Figure 6, where the GNU_RELRO segment covers the area from 0x08049000 to 0x0804a000, which includes .GOT. On QNX, however, the same full RELRO binary looks as pictured in Figure 7: the GNU_RELRO segment covers the same range but, due to the broken section ordering, does not include .GOT, allowing us to violate RELRO with e.g. a GOT-overwriting attack.
**Local RELRO Bypass (CVE-2017-3893):** On Unix-like systems the LD_DEBUG environment variable is used to pass debugging settings to the dynamic linker. QNX has a custom debugging option dubbed ‘IMPOSTER’ which, among other things, disables RELRO. Since there are no privilege checks on this debug setting, a low-privileged user could leverage it to target setuid binaries belonging to higher-privileged users in order to bypass any RELRO protections and thus ease exploitation.
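The missing check here is the standard secure-execution test a runtime linker performs before honoring debug environment variables; glibc, for instance, ignores most of LD_DEBUG for setuid binaries. A sketch of such a check (our simplification, not QNX's code):

```c
#include <stdlib.h>
#include <unistd.h>

/* Honor debug env vars only when real and effective IDs match,
 * i.e. the process has not gained privilege via setuid/setgid. */
static int debug_env_allowed(void)
{
    return getuid() == geteuid() && getgid() == getegid();
}

static const char *linker_debug_setting(void)
{
    return debug_env_allowed() ? getenv("LD_DEBUG") : NULL;
}
```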
4 QNX 7.0 Exploit Mitigations
QNX 7, released in January 2017, is the successor to QNX 6.6 and comes with support for 64-bit architectures such as ARM v8 or Intel x86-64. It comes with a host of new security features such as secure boot, integrity measurement, mandatory access control, host-based anomaly detection, granular sandboxing and secure software updates.
In this section we will document the results of our reverse-engineering and analysis of QNX 7.0’s exploit mitigations and secure random number generators and the degree to which they differ from and improve upon their predecessors in QNX 6.6 and earlier. Table 10 provides an overview of QNX 7.0’s exploit mitigations and their default settings.
4.1 Executable Space Protection
Despite our disclosure of the insecure default settings, it turns out they have not been fixed in QNX 7 and as such the stack (but not the heap) is executable by default. In order to enable non-executable stacks, system integrators have to start PROCNTO with the -m-x flag. Unfortunately, there is no way to guarantee per-binary backwards compatibility in this case, as the GNU_STACK ELF header is not parsed.
<table>
<thead>
<tr>
<th>Mitigation</th>
<th>Support</th>
<th>Default</th>
</tr>
</thead>
<tbody>
<tr>
<td>ESP</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td>ASLR</td>
<td>✓</td>
<td>✗</td>
</tr>
<tr>
<td>SSP</td>
<td>✓</td>
<td>✓¹</td>
</tr>
<tr>
<td>RELRO</td>
<td>✓</td>
<td>✓¹</td>
</tr>
<tr>
<td>NULL-deref Protection</td>
<td>✗</td>
<td>n/a</td>
</tr>
<tr>
<td>Vtable Protection</td>
<td>✗</td>
<td>n/a</td>
</tr>
<tr>
<td>CFI</td>
<td>✗</td>
<td>n/a</td>
</tr>
<tr>
<td>CPI</td>
<td>✗</td>
<td>n/a</td>
</tr>
<tr>
<td>Kernel Data Isolation²</td>
<td>✗</td>
<td>n/a</td>
</tr>
<tr>
<td>Kernel Code Isolation³</td>
<td>✗</td>
<td>n/a</td>
</tr>
</tbody>
</table>
Table 10: QNX 7 Exploit Mitigation Overview
¹ Default QNX Momentics IDE Settings, ² e.g. UDEREF / SMAP / PAN, ³ e.g. KERNEXEC / SMEP / PXN
4.2 Address Space Layout Randomization
QNX 7 ASLR remains disabled by default and does not provide KASLR support for kernel image base randomization.
**ASLR Randomization:** As shown in Listing 8, QNX 7’s stack_randomize routine remains identical to that of QNX 6.6, save for replacing ClockCycles as the entropy source with random_value. However, the issue of entropy being theoretically limited to an upper bound of 7 bits as a result of the bitmasking remains.
**Listing 8: QNX 7 stack_randomize Routine**
```c
uintptr_t stack_randomize(const THREAD * thp, uintptr_t new_sp)
{
uintptr_t rnd_sp;
size_t stack_size;
unsigned int size_mask;
rnd_sp = new_sp;
stack_size = thp->un.lcl.stacksize >> 4;
if ( stack_size )
{
size_mask = 0x7FF;
if ( stack_size <= 0x7FE )
{
do
size_mask >>= 1;
while ( stack_size < size_mask );
}
rnd_sp = (new_sp - (random_value() << 4 & size_mask)) & 0xFFFFFFFFFFF000;
}
return rnd_sp;
}
```
The ASLR randomization underlying calls to mmap is now being handled by the kerext_valloc and vm_region_create functions as part of a rewritten memory manager. As shown in Listings 9 and 10 in both cases all entropy is drawn from random_value. In the former case the full 32 bits of entropy could be absorbed on a 64-bit system while in the latter case the bitmasking imposes a theoretical upper limit of 12 bits.
**Listing 9: QNX 7 kerext_valloc Snippet**
```c
void kerext_valloc(void *data)
{
...
if ( size_val != obj->size )
{
randomized_addr = (obj->addr + (random_value() << 12) % (obj->size - size_val)) & 0xFFFFFFFFFFF000;
}
...
}
```
**Listing 10: QNX 7 vm_region_create Snippet**
```c
signed __fastcall vm_region_create(vm_aspace_t *as, vm_mmap_attr_t *attr)
{
...
rnd_val = (random_value() << 12) & 0xFFF000;
start_distance = start - best_start;
if ( start != best_start )
{
if ( start_distance < rnd_val )
rnd_val %= start_distance;
start -= rnd_val;
}
...
}
```
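The 12-bit bound on the second case follows directly from the masking: only bits 12..23 of the shifted value survive, giving at most 4096 distinct page-aligned offsets. This can be checked with a few lines (our own illustration):

```c
#include <stdint.h>

/* Model of the vm_region_create masking: offsets are page-aligned
 * and confined to bits 12..23, i.e. 2^12 possible values. */
static uint32_t region_offset(uint32_t r)
{
    return (r << 12) & 0xFFF000;
}
```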
Curiously, however, QNX 7’s memory manager restricts initial randomized mapping of userspace stack, heap and executable image objects to the lower 32 bits of the address space while restricting shared libraries to the lower 36 bits.
**Information Leaks:** While the LD_DEBUG Infoleak (CVE-2017-9369) has been fixed in QNX 7, the procfs Infoleak (CVE-2017-3892) is still very much present, with only some restrictions imposed on it. In QNX 7, procfs has been slightly modified so that interaction with processes now goes through the `/proc/*/ctl` pseudo-file. While the `/proc/*/` directories have stronger permission settings and the PIDIN tool no longer allows for direct disclosure of sensitive address information from higher-privileged processes, `/proc/*/ctl` remains world-readable for all process entries and accessible to the `devctl` API. As such, a local attacker is still able to disclose sensitive address information across privilege boundaries. While capability-based sandboxing might limit the exposure of certain processes, this is not configured to be the case by default.
We have left correlation attack evaluation for QNX 7 to future work.
### 4.3 Stack Smashing Protector
SSP is enabled by default in QNX Momentics 7.0.0 and generates 64-bit canaries on 64-bit systems.
**Userspace Canary Generation:** The new `_init_cookies` routine in QNX 7 is shown in Listing 11 where we can see that `_stack_chk_guard` is formed by the XOR sum of `rdtsc`, the code address of `_init_cookies`, the stack address of `stackval` and the value stored in `auxil_val`. This approach is similar to the one in QNX 6.6 save for the introduction of `auxil_val` which is a 64-bit value drawn from the `AT_RANDOM` ELF auxiliary vector entry. ELF auxiliary vectors [30] are a mechanism to transfer OS information to user processes via the program loader. This approach was integrated into QNX 7.0 based on our suggestions to the vendor.
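On Linux, the corresponding AT_RANDOM entry can be read via getauxval(3); the kernel supplies a pointer to 16 random bytes at process startup. A minimal Linux-side example (illustrating the mechanism, not QNX's API):

```c
#include <sys/auxv.h>

/* Returns the kernel-supplied 16 random startup bytes, or NULL. */
static const unsigned char *at_random_bytes(void)
{
    return (const unsigned char *)getauxval(AT_RANDOM);
}
```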
**Listing 11: QNX 7 Userspace Canary Generation**
```c
void _init_cookies()
{
  auxv_t *auxil;
  int auxil_type;
  __int64 auxil_val;
  unsigned __int64 c0;
  unsigned __int64 c1;
  char stackval;

  auxil = auxv;
  auxil_type = auxil->a_type;
  if ( auxil_type )
  {
    while ( auxil_type != AT_RANDOM )
    {
      ++auxil;
      auxil_type = auxil->a_type;
      if ( !auxil->a_type )
        goto END_AUXV;
    }
    auxil_val = auxil->un.a_val;
  }
  else
  {
END_AUXV:
    auxil_val = 0LL;
  }
  c0 = __rdtsc()
       ^ (((unsigned __int64)_init_cookies ^ (unsigned __int64)&stackval) >> 8)
       ^ auxil_val;
  _stack_chk_guard = (void *)c0;
  c1 = ((unsigned __int64)&stackval ^ c0) >> 8;
  _atexit_list_cookie = (void *)(c1 ^ __rdtsc());
  BYTE_OFFSET_6(_stack_chk_guard) = 0;
  _atqexit_list_cookie = (void *)(c1 ^ __rdtsc());
}
```
Upon reverse-engineering the `loader_load` routine in the QNX microkernel as shown in Listing 12 we can see `AT_RANDOM` is filled with a concatenation of two 32-bit values drawn from the `random_value` kernel PRNG (discussed below in Section 5.2).
**Listing 12: QNX 7 AT_RANDOM Generation**
```c
auxil_pointer->a_type = AT_RANDOM;
auxil_pointer->a.un.a_val = (unsigned int)random_value();
if ( interp_name[7] & 8 )
{
  auxil_pointer->a.un.a_val |= random_value() << 32;
  ...
}
```
**Kernelspace Canary Generation:** The absent kernelspace canary generation vulnerability affecting QNX 6.6 and prior has been fixed in QNX 7. During early boot in `kernel_main`, prior to kernel kickoff, the kernelspace master canary is drawn from a concatenation of two 32-bit random values drawn from the `random_value` kernel PRNG as shown in Listing 13.
**Listing 13: QNX 7 Kernelspace Canary Generation**
```c
callin_init();
mdriver_check();
*(DWORD *)&inkernel = 0xD00;
c0 = random_value();
c1 = random_value();
_stack_chk_guard = (void *)((c1 << 32) | c0);
kker_exit_kickoff(percpu_ptr->data.ker_stack);
```
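The concatenation in Listing 13 is the usual way to build a 64-bit value from two 32-bit PRNG draws; as a sanity check (our own helper, not QNX code):

```c
#include <stdint.h>

/* (hi << 32) | lo, as used for the QNX 7 kernel master canary. */
static uint64_t concat32(uint32_t lo, uint32_t hi)
{
    return ((uint64_t)hi << 32) | lo;
}
```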
### 4.4 Relocation Read-Only
The RELRO vulnerability we reported has been fixed in QNX 7 with QCC observing proper ELF section ordering and full RELRO being enabled by default in QNX Momentics 7.0.0.
5 QNX Secure Random Number Generators
5.1 /dev/random in QNX ≤ 6.6
Many mitigations require a source of secure randomness and ideally this is provided by the operating system itself. As such, the security of the OS random number generator is of crucial importance to the security of exploit mitigations as well as the overall cryptographic ecosystem. As prior work has shown [2, 4, 9, 10], embedded random number generation suffers from a variety of issues with far-reaching consequences and as such we reverse-engineered and analyzed the QNX OS random number generator.
QNX provides an Operating System Cryptographically Secure Random Number Generator (OS CSPRNG) exposed through the Unix-style /dev/random and /dev/urandom interfaces, both of which are non-blocking.

**Figure 8: Simplified QNX 6.6 YARROW Design**

The QNX YARROW implementation (as illustrated in Figure 8), however, is not based on the reference YARROW-160 [8] design but instead on an older YARROW 0.8.71 implementation by Counterpane [24] which has not undergone the security scrutiny YARROW-160 has seen over the years and differs in the following key aspects:
- **Single Entropy Pool:** While YARROW-160 has separate fast and slow entropy pools, YARROW 0.8.71 only has a single entropy pool. The two pools were introduced so that the fast pool could provide frequent reseeds of the YARROW key to limit the impact of state compromises, while the slow pool provides rare, but very conservative, reseeds of the key to limit the impact of entropy estimates which are too optimistic. YARROW 0.8.71's single pool does not allow for such a separation.
- **No Blockcipher Applied To Output:** As opposed to YARROW-160, YARROW 0.8.71 does not apply a block cipher (e.g. in CTR mode) to the YARROW internal state before producing PRNG output and instead simply outputs the internal state directly, which results in a significantly weaker design than that of YARROW-160.

In addition, the QNX YARROW implementation diverges from YARROW 0.8.71 in the following aspects:
- **Mixes PRNG Output Into Entropy Pool:** As part of its various entropy collection routines, QNX YARROW mixes PRNG output back into the entropy pool. For example, in the high-performance counter entropy collection routine (as per the snippet in Listing 14) we can see PRNG output is drawn from QNX YARROW, used as part of a delay routine and subsequently mixed (via an xor operation with the result of a ClockCycles call) back into the entropy pool. This construction deviates from all YARROW designs and is ill-advised in the absence of further security analysis or justification.

**Listing 14: QNX YARROW HPC Entropy Collection Snippet**
```c
if ( Yarrow )
{
  yarrow_output( Yarrow, (uint8_t *)&rdata, sizeof( rdata ) );
  timeout = ( rdata & 0x3FF ) + 10;
}
delay( timeout );
clk = ClockCycles();
clk = clk ^ rdata;
if ( Yarrow )
  yarrow_input( Yarrow, (uint8_t *)&clk, sizeof( clk ), pool_id, 8 );
```

- **Absent Reseed Control (QNX < 6.6):** In all QNX versions prior to 6.6, reseed control is completely absent. While the required functionality was implemented, the responsible functions are never actually invoked, which means that while entropy is being accumulated during runtime it is never actually used to reseed the state; thus only boottime entropy is ever used to seed the QNX YARROW state in versions prior to 6.6.
- **Custom Reseed Control (QNX 6.6):** In QNX 6.6 there is a custom reseeding mechanism integrated into the yarrow_do_sha1 and yarrow_make_new_state functions (as illustrated in Listings 15 and 16) which are called upon PRNG initialization and whenever output is drawn from the PRNG (which means they are also constantly called during entropy accumulation due to the above-mentioned output mixing mechanism). In both cases, a permutation named IncGaloisCounter5X32 is applied to the entropy pool before the pool contents are mixed into a SHA1 state which eventually becomes the YARROW internal state. Contrary to YARROW design specifications, no entropy quality estimation is done before reseeding the state from the entropy pool, thus potentially allowing low-quality entropy to determine the entire state.
In order to evaluate the QNX Yarrow PRNG output quality we used two test suites: DieHARDer [18] and the NIST SP800-22 [40] Statistical Test Suite (STS) [39]. DieHARDer is a random number generator testing suite, composed of a series of statistical tests, "designed to permit one to push a weak generator to unambiguous failure" [18]. The NIST Statistical Test Suite (STS) consists of 15 tests developed to evaluate the 'randomness' of binary sequences produced by hardware- or software-based cryptographic (pseudo-) random number generators by assessing the presence or absence of a particular statistical pattern. The goal is to "minimize the probability of accepting a sequence being produced by a generator as good when the generator was actually bad" [40]. While there are an infinite number of possible statistical tests and as such no specific test suite can be deemed truly complete, they can help uncover particularly weak random number generators.
QNX Yarrow passed both the DieHARDer and NIST STS tests but this only tells us something about the quality of PRNG output, leaving the possibility open that raw noise / source entropy is (heavily) biased which can result in predictable PRNG outputs as well as attackers being able to replicate PRNG internal states after a reasonable number of guesses. As such we reverse-engineered and evaluated the QNX random service's boot- and runtime entropy sources.
**Boottime Entropy Analysis:** When random is initialized it gathers initial boottime entropy from the following sources (as illustrated in Figure 9) which are fed to the SHA1 hash function to produce a digest used to initialize the PRNG initial state:
- **Device Names:** The currently available device names by walking the /dev directory.
- **PIDs:** The currently active process IDs by walking the /proc directory.
- **ClockTime [47]:** The current system clock time.
- **ClockCycles [46]:** The current value of a free-running 64-bit clock cycle counter.
Evaluating sampled QNX boottime raw noise showed that the average min-entropy was 0.02766687, which is far less than 1 bit of min-entropy per 8 bits of raw noise. In addition to the boottime entropy of individual boot sessions being of low quality, the static or minimally variable nature of many of the boottime noise sources (identical processes and devices available upon reboot, the real-time nature of QNX limiting jitter between kernel calls and thus reducing ClockCycles entropy, etc.) results in predictable and consistent patterns across reboots.
While all the above discussed divergences are at the very least ill-advised, the reseeding control issues constitute a clear security issue. In the case of absent reseeding control, it eliminates Yarrow's intended defense against state compromise as well as greatly increasing system susceptibility to the so-called 'boottime entropy hole' [10] that affects embedded systems. In the case of the QNX 6.6 Yarrow custom reseeding control, no entropy quality estimation is done before reseeding the state from the entropy pool.
Another boottime entropy issue with QNX’s RANDOM service is the fact that the service is started as a process by startup.sh. As a result, the CSPRNG is only available quite late in the boot process and many services which need it (e.g. sshd) start almost immediately after. Since RANDOM only offers non-blocking interfaces, this means that one can draw as much output from the CSPRNG as one wants immediately upon availability of the device interface. Hence, many applications which start at boot and require secure random data have their ‘randomness’ determined almost completely by the (very low quality) boottime raw noise since there is little time for the QNX RANDOM service to gather runtime entropy before being queried thus amplifying the impact of the “boot-time entropy hole” [10].
Runtime Entropy Analysis: The QNX random service leaves the choice and combination of runtime entropy sources (as illustrated in Figure 10) up to the person configuring the system with the following options:
- **Interrupt Request Timing**: Up to 32 different interrupt numbers may be specified to be used as an entropy source. The entropy here is derived from interval timing measurements (measured by the ClockTime kernel call) between requests to a specific interrupt.
- **System Information Polling**: This source collects entropy from system information via the procfs [48] virtual filesystem in /proc. This information is composed of process and thread information (process and thread IDs, stack and program image base addresses, register values, flag values, task priority, etc.) for every currently active process.
- **High-Performance Clock Timing**: This source draws entropy from the PRNG (using the yarrow_output function), initiates a delay (in milliseconds) based on the PRNG output, invokes ClockCycles and xors the result against the earlier obtained PRNG output and feeds this into the entropy pool.
- **Library Hardware Entropy Source (Undocumented)**: This undocumented entropy source (invoked using command-line parameter -J) allows a user to specify a dynamic library to supply entropy collection callback functions named entropy_source_init and entropy_source_start. In order to be used the library has to export a symbol named cookie with the NULL-terminated value RNG (0x524E4700). Based on debugging information it seems this is to allow for drawing from a hardware random number generator as an entropy source.
- **User-Supplied Input (Undocumented)**: In QNX 6.6 the RANDOM service has a write-handler made available to users via the kernel resource manager (in the form of handling write operations to the /dev/(u)random interfaces) which takes arbitrary user inputs of up to 1024 bytes per write operation and feeds it directly into the entropy pool by passing it to the yarrow_input operation. Write operations of this kind are restricted to the root user only.
After initialization, RANDOM starts a thread for each entropy source which will gather entropy and store it in the entropy pool. Contrary to our analysis of QNX RANDOM’s boottime entropy, we did not perform a runtime entropy quality evaluation because during our contact with the vendor they had already indicated the current design would be overhauled in upcoming patches and future releases as a result of our findings. In addition, in all QNX versions except for 6.6 runtime entropy is accumulated but not used due to the previously mentioned absent reseeding control. We did have the following observations however:
- **Entropy Source Configuration**: Configuring runtime entropy sources is entirely left to system integrators. Since the entropic quality of certain sources (e.g. interrupt request timings or system information polling) varies depending on the particular system, it is non-trivial to pick suitable sources.
- **System Information Entropy Source**: System information polling gathers raw noise from currently running processes (in the form of process and thread debug info). A significant number of the fields in the process and thread debug info structures, however, are largely static values (e.g. uid, flags, priority, stack and program base in the absence of ASLR, etc.) with most randomness derived from time-based fields (starttime, runtime) or program state (ip, sp).
- **Interrupt Request Timing Entropy Source**: Interrupt request timing gathers raw noise from interrupt invocation timings. As such this means that if integrators choose to specify interrupts that are rarely or never invoked, barely any runtime entropy is gathered using this source. Interrupt invocation frequency can be very system specific and picking the right interrupts is not trivial. The QNX documentation explicitly recommends to “minimize the impact of [interrupt request timing overhead] by specifying only one or two interrupts from low interrupt rate devices such as disk drivers and input/serial devices” [56], an advice which would result in less entropy being accumulated from this source. Furthermore, it seems that if for whatever reasons the RANDOM service cannot attach to an interrupt, the interrupt entropy gathering thread fails silently and no entropy is gathered for that interrupt at all.
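The general shape of interval-timing entropy collection, common to the interrupt and clock based sources above, can be sketched as follows (our own generic model, not QNX's implementation):

```c
#include <stdint.h>

/* XOR-fold successive timestamp deltas into a small pool; a real
 * collector would feed such deltas into yarrow_input instead. */
static uint32_t pool;

static void feed_interval(uint32_t now, uint32_t *last)
{
    pool ^= now - *last;  /* only the (jittery) delta carries entropy */
    *last = now;
}
```

The sketch also makes the failure mode visible: if events arrive at perfectly regular intervals, every delta is identical and the pool gains almost nothing.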
**5.2 QNX 7.0 Kernel PRNG**
QNX 7 has a new kernel PRNG for generation of secure random numbers implemented in the microkernel's `random_value` function. As shown in Listing 17 and illustrated in Figure 11, the kernel PRNG consists of a 256-bit seed block fed through SHA256 to produce a digest from which 32-bit random numbers are drawn iteratively before reseeding after exhausting the entire digest.
**Listing 17: QNX 7 Kernel PRNG**
```c
unsigned int random_value(void)
{
  unsigned int new_dig_idx;
  unsigned int result;
  uint32_t keypad[8];
  sha256_t shout;

  if ( dig_idx > 7 )
  {
    keypad[0] = salt ^ ClockCycles();
    keypad[1] = active[get_cpu_num()];
    keypad[2] = salt ^ timeptr->nsec;
    keypad[3] = pid_unique ^ salt;
    keypad[4] = wakeup_timer;
    keypad[5] = kernel_exit_count;
    keypad[6] ^= random_seed;
    sha256_init(&shout);
    sha256_add(&shout, keypad, 0x20u);
    sha256_done(&shout, digest);
    result = digest[0];
    if ( !salt )
      salt = digest[0];
    new_dig_idx = 1;
  }
  else
  {
    new_dig_idx = dig_idx + 1;
    result = digest[dig_idx];
  }
  dig_idx = new_dig_idx;
  return result;
}
```
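The draw-and-reseed pattern of Listing 17 (consume a digest one 32-bit word at a time, regenerate when exhausted) can be modeled structurally as follows (a toy stand-in for SHA256; all names ours):

```c
#include <stdint.h>

static uint32_t toy_digest[8];
static unsigned toy_idx = 8;   /* force regeneration on first draw */
static uint32_t toy_counter;

/* Stand-in for hashing the seed block with SHA256. */
static void toy_rehash(void)
{
    for (int i = 0; i < 8; i++)
        toy_digest[i] = ++toy_counter * 2654435761u;
}

static uint32_t toy_random_value(void)
{
    if (toy_idx > 7) {          /* digest exhausted: reseed */
        toy_rehash();
        toy_idx = 0;
    }
    return toy_digest[toy_idx++];
}
```

As in the real routine, the quality of every eighth draw onward depends entirely on how unpredictable the seed block is at regeneration time.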
Kernel PRNG entropy is drawn from a combination of the following values:
- **salt**: A salt value which starts out as 0 and then gets filled with the first non-zero 32 bits of every newly generated digest.
- **ClockCycles**: The current clock cycle counter value.
- **active[get_cpu_num()]**: The currently active thread on this CPU.
- **timeptr->nsec**: The current time in nanoseconds.
- **pid_unique**: The currently active PID.
- **wakeup_timer**: The timer wakeup value [45].
- **kernel_exit_count**: Counter keeping track of the number of kernel exit operations.
- **random_seed**: Random seed user-supplied via SysRandom [58] kernel calls. This kernel call can only be made by processes with the PROCMGR_AID_SRANDOM ability.
Of these sources, pid_unique and active[get_cpu_num()] have a limited range of possible values, and none of the sources except for wakeup_timer, kernel_exit_count and random_seed can be considered secret. Some sources (e.g. ClockCycles, kernel_exit_count) are also likely to have greatly reduced ranges during boot-time.
Finally, note that all sources are truncated to 32-bit values when stored in the seed block, that random_seed is only initialized when system integrators utilize it and that the final block (keypad[7]) is never initialized. As such, in many cases the theoretical maximum of the entropy contained within the seed block would be reduced to 192 bits. A full evaluation of the entropic quality of the QNX 7 kernel PRNG is left to future work.
5.3 /dev/random in QNX 7.0
Following our advisory on the QNX Yarrow PRNG, the QNX 7 random service was redesigned to use FORTUNA instead. While the design and interface of the random service remains mostly the same, QNX 7 uses a customized version of the HEIMDAL [32] FORTUNA implementation as illustrated in Figure 12.
The QNX 7 FORTUNA implementation no longer has dedicated boot- and runtime entropy collection routines and draws upon the following entropy sources:
- **Interrupt Request Timing**: This source is identical to the one in the QNX 6.6 RANDOM implementation.
- **System Information Polling**: This source is identical to the one in the QNX 6.6 RANDOM implementation.
- **High-Performance Clock Timing**: This source is identical to the one in the QNX 6.6 RANDOM implementation.
- **Library Hardware Entropy Source**: This source is identical to the one in the QNX 6.6 RANDOM implementation.
- **User-Supplied Input**: Anything written to the /dev/(u)random device is immediately absorbed into the PRNG state and, if seedfile state persistence is enabled, the state is saved as well. It is possible to shield this functionality with the -m mode option specifying permissions, but by default the interface is world-writable, which could possibly present an avenue for reseeding attacks.
- **Seedfile Source**: If specified, QNX 7 FORTUNA can load and save entropy from and to a 128-byte seedfile (done after writing to /dev/(u)random or automatically after 8192 reseedings). This file is owned by root with user read/write permissions only.
- **Reseed Source**: Reseed control is integrated into the fortuna_bytes and fortuna_init routines and thus checked periodically. It is implemented as shown in Listing 18. This routine is not likely to provide high-quality reseed entropy considering that pid and uid do not change for the random service and that arc4random reads from /dev/random (creating a circular reseed loop) and uses the broken RC4 cipher.
Listing 18: QNX 7 fortuna_reseed
```c
#define INIT_BYTES 128

int fortuna_reseed() {
    ...
    uint32_t buf[INIT_BYTES / sizeof(uint32_t)];
    int i;
    int entropy_p = 0;

    if (!init_done) abort();

    for (i = 0; i < sizeof(buf)/sizeof(buf[0]); i++)
        buf[i] = arc4random();
    add_entropy(&main_state, (void *)buf, sizeof(buf));
    entropy_p = 1;

    pid_t pid = getpid();
    add_entropy(&main_state, (void *)&pid, sizeof(pid));

    struct timeval tv;
    gettimeofday(&tv, NULL);
    add_entropy(&main_state, (void *)&tv, sizeof(tv));

    uid_t u = getuid();
    add_entropy(&main_state, (void *)&u, sizeof(u));

    return entropy_p;
}
```
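The circular reseed loop noted above (arc4random reading back from /dev/random) can be illustrated outside of QNX entirely: a pool reseeded from its own output can be tracked perfectly by anyone who has learned its state once. The following toy sketch is entirely our own (the xorshift generator and all names are illustrative stand-ins, not QNX code):

```java
public class CircularReseed {

    // Toy xorshift state standing in for the PRNG pool (illustration only).
    static int next(int[] s) {
        s[0] ^= s[0] << 13;
        s[0] ^= s[0] >>> 17;
        s[0] ^= s[0] << 5;
        return s[0];
    }

    // "Reseed" the pool with material derived from the pool itself,
    // analogous to fortuna_reseed pulling arc4random output that is
    // ultimately derived from the same random state.
    static void circularReseed(int[] s) {
        s[0] ^= next(s);
    }

    public static void main(String[] args) {
        int[] real = {0xdeadbeef};      // the service's pool
        int[] attacker = {0xdeadbeef};  // attacker who learned the state once
        circularReseed(real);
        circularReseed(attacker);
        // Circular reseeding adds nothing: the attacker still tracks the pool.
        if (real[0] != attacker[0]) throw new AssertionError();
        real[0] ^= 0x1234567;           // stand-in for genuine external entropy
        if (real[0] == attacker[0]) throw new AssertionError();
        System.out.println("circular reseed added no entropy");
    }
}
```

Only material the attacker cannot predict (the final XOR, standing in for a genuine external source) breaks the tracking, which is why pid, uid and arc4random output make weak reseed inputs.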
Due to the elimination of dedicated boottime entropy harvesting and its rapid startup time, QNX 7 is likely to suffer from the "boottime entropy hole" (unless system integrators explicitly enable seed-files / state persistence) but we leave a full analysis of entropic quality to future work.
6 Conclusion
We reverse-engineered and analyzed the exploit mitigations and secure random number generators of QNX ≤ 6.6 and 7.0, and found and reported a myriad of issues of varying degrees of severity. Table 11 presents an overview of the analyzed mitigations and RNGs, their issues and which versions are affected by them. Note that we have left a proper evaluation of RNG entropy quality and ASLR correlation attacks on QNX 7.0 to future work, and as such we can neither confirm nor rule out issues in this regard.
We can see that despite our disclosure of the issues affecting QNX 6.6 and subsequent fixes being drafted for the bulk of them, some of them remained in QNX 7.0. Regardless, General Availability (GA) patches are available for all issues affecting QNX ≤ 6.6 in Table 11 (naturally excluding those affecting QNX 7.0).
One striking observation is that while QNX clearly attempts to keep up with at least basic exploit mitigations as they have evolved in the general purpose world, the fact that it is a proprietary OS outside of the Linux, Windows and BSD lineages means that they cannot trivially port mitigations, patches and improvements from these operating systems. In addition, the relative lack of attention to QNX by outside security researchers is evident from the degree to which certain vulnerabilities and issues (such as the local information leaks or the "poor man's randomization patch" design for ASLR/SSP) resemble older vulnerabilities on other Unix-like systems. Finally, our findings re-confirm the notion that secure random number generation and especially integrating suitable entropy sources is an issue that continues to plague the embedded world. The impact of this goes beyond affecting the quality of exploit mitigations and has consequences for the wider security ecosystem as a whole.
It is our hope that this work inspires other security researchers to further investigate the security and OS internals of QNX and other closed-source embedded operating systems.
Integrating and Extending JCSP
Peter WELCH a, Neil BROWN a, James MOORES b, Kevin CHALMERS c and Bernhard SPUTH d
a Computing Laboratory, University of Kent, Canterbury, Kent, CT2 7NF, UK.
b 23 Tunnel Avenue, London, SE10 0SF, UK.
c School of Computing, Napier University, Edinburgh, EH10 5DT, UK.
d Department of Engineering, University of Aberdeen, Scotland, AB24 3UE, UK.
Abstract. This paper presents the extended and re-integrated JCSP library of CSP packages for Java. It integrates the differing advances made by Quickstone’s JCSP Network Edition and the “core” library maintained at Kent. A more secure API for connecting networks and manipulating channels is provided, requiring significant internal re-structuring. This mirrors developments in the occam-π language for mandated direction specifiers on channel-ends. For JCSP, promoting the concept of channel-ends to first-class entities has both semantic benefit (the same as for occam-π) and increased safety. Major extensions include alting barriers (classes supporting external choice over multiple multi-way synchronisations), channel output guards (straightforward once we have the alting barriers), channel poisoning (for the safe and simple termination of networks or sub-networks) and extended rendezvous on channel communications (that simplify the capture of several useful synchronisation design patterns). Almost all CSP systems can now be directly captured with the new JCSP. The new library is available under the LGPL open source license.
Keywords. JCSP, Alting Barriers, Output Guards, Extended Rendezvous, Poison
Introduction
JCSP (Communicating Sequential Processes for Java) [1,2,3,4] is a library of Java packages providing a concurrency model that is a judicious combination of ideas from Hoare’s CSP [5] and Milner’s π-calculus [6]. It follows many of the principles of occam-π [7,8,9,10], exchanging compiler enforced security for programmer checked rules, losing some ultra-low process management overheads but winning the model for a mainstream programming language. Along with CTJ [11], JCSP is the forerunner of similar libraries for other environments – such as C++CSP [12], CTC++ [13] and the .NET CSP implementations [14,15].
JCSP enables the dynamic and hierarchic construction of process networks, connected by and synchronising upon a small set of primitives – such as message-passing channels and multiway events. Each process manages its own state and engages in patterns of communication with its environment (represented by channels, barriers etc.) that can be formally contracted (in CSP). Each process is independently constructed and tested without concern for multiprocessing side-effects – there is no need for locking mechanisms. In this way, our long developed skills for sequential design and programming transfer directly into concurrent design and programming. Whole system (multiprocessing) behaviour yields no surprises and can be analysed for bad behaviour (e.g. deadlock) formally, with the option of assistance from automated model checkers (such as FDR [16]). The model works unchanged whether the concurrency is internal to a single machine (including multicore architectures) or distributed across many machines (including workstation clusters and the Internet).
\(^{1}\)Java is a trademark of Sun Microsystems.
JCSP is an alternative concurrency model to the threads and monitor mechanisms built into Java. It is also compatible with it – indeed, it is currently implemented on top of it! With care, the two models can profitably be mixed\(^2\). Java 1.5 includes a whole new set of concurrency primitives – some at a very low level (e.g. the atomic swaps and counts). These also provide an alternative to threads and monitors. Depending on the relative overheads between the 1.5 and classical methods, it may be worthwhile re-implementing JCSP on the lowest level 1.5 primitives. Meanwhile, we are confident in the current implementation, which has been formalised and model checked [17].
JCSP was developed following WoTUG’s Java Threads Workshop [18] in 1996. Using ideas kicked around at that workshop [19], the first library (JCSP 0.5, [20]) was designed and put together by Paul Austin, a Masters student at Kent, some time in 1997. It has been under continuous development ever since by a succession of undergraduate/Masters/PhD students (Neil Fuller, Joe Aldous, John Foster, Jim Moores, David Taylor, Andrew Griffin) together with the present authors. A major undertaking was the spin-off of Quickstone Technologies Limited (QTL), that crafted the JCSP Network Edition. This enables the dynamic distribution of JCSP networks across any network fabric, with no change in semantics (compared with a single JVM version) – only a change in performance and the size of the system that can be run. Sadly, QTL is no more – but its work survives and is being re-integrated with the core version (which had made several independent advances, some reported here) to form the LGPL open-source new JCSP 1.1 release.
JCSP was designed for use with anything above and including Java 1.1. This compatibility with Java 1.1 has been maintained up to the current core release: JCSP 1.0-rc7. Given that most modern mobile devices support at least Java 1.3, we may relax this self-imposed constraint (and start, for example, using collection classes in the revised implementation). Other new mechanisms available in Java 1.5 (e.g. generics) and their binding into the future of JCSP are discussed in section 6.
In section 1 of this paper, we describe and motivate small changes in API and the refactoring of the channel classes and interfaces resulting from the merger of the JCSP Network Edition and JCSP 1.0-rc7. Section 2 presents the alting barriers that are completely new for JCSP, together with some implementation details. Section 3 shows how these facilitate channels that allow output guards in external choice (alting). The addition of extended rendezvous to JCSP is given in section 4, including how this works with buffered channels of various kinds. Section 5 presents the addition of channel poisoning for the safe and simple termination of networks (or sub-networks). Finally, Section 6 considers opportunities for the future of JCSP.
1. Class Restructure
1.1. JCSP 1.0-rc7
In JCSP 1.0-rc7, there are two interfaces for channel-ends: ChannelInput and ChannelOutput. There is also the abstract class AltingChannelInput, which extends the abstract class Guard\(^3\) and the interface ChannelInput and enables channels to be used as input guards in external choice (alting). All this remains in JCSP 1.1.
---
\(^{2}\)For straightforward management of a shared resource, we have sometimes employed direct visibility with synchronized blocks to serialise access – rather than accept the overheads of a very simple server process. For more sophisticated management, we would always use a process. Using and reasoning about an object's wait, notify and notifyAll methods should be avoided at all costs!
\(^{3}\)This defines a public type with a set of method headers visible and used only within org.jcsp.lang – sadly, Java does not permit such things in an interface.
JCSP 1.0-rc7 channel classes, such as One2OneChannel, implement the AltingChannelInput and ChannelOutput classes/interfaces and all the corresponding methods. Processes take channel-end types, such as ChannelOutput or AltingChannelInput, as arguments to their constructor. Actual channel instances are passed directly to these constructors – with Java implicitly casting them down to the expected interface types.
This structure allows misuse: a process, having been given a ChannelInput, can cast it to a ChannelOutput – and vice-versa! Such tricks do enable a channel to be used in both directions, but would probably lead to tears. They are prevented in JCSP 1.1.
Classical zero-buffered fully synchronising channels are provided along with a variety of buffered versions (blocking, overwriting, overflowing). Zero-buffered channels are implemented with a different (and faster) logic than the buffered ones. A memory inefficient feature of the JCSP 1.0-rc7 implementation is that the buffered channels sub-class the zero-buffered classes, although that is not relevant (or visible) to the API. So, buffered classes retain fields relevant only to the unused superclass logic. This does not happen in JCSP 1.1.
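As an aside on the buffering policies just listed, the overwriting variant can be sketched in a few lines. This is a simplified stand-in of our own, not JCSP's implementation (a real channel also blocks an empty read and synchronises reader and writer):

```java
import java.util.ArrayDeque;

// Simplified stand-in for an overwriting channel buffer (not JCSP's code):
// when full, a write silently replaces the oldest unread message.
public class OverwritingBuffer<T> {

    private final ArrayDeque<T> queue = new ArrayDeque<>();
    private final int capacity;

    public OverwritingBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void write(T value) {
        if (queue.size() == capacity) {
            queue.removeFirst();  // overwrite policy: drop the oldest
        }
        queue.addLast(value);
    }

    public T read() {
        return queue.removeFirst();  // a real channel would block when empty
    }

    public static void main(String[] args) {
        OverwritingBuffer<Integer> buf = new OverwritingBuffer<>(2);
        buf.write(1);
        buf.write(2);
        buf.write(3);   // buffer full: 1 is silently overwritten
        if (buf.read() != 2 || buf.read() != 3) throw new AssertionError();
        System.out.println("oldest message overwritten");
    }
}
```

The blocking and overflowing variants differ only in what `write` does when the buffer is full: block the writer, or discard the new message.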
1.2. JCSP Network Edition
In the JCSP Network Edition, the channel-end interfaces and abstract classes are the same as above. There are also extended interfaces, SharedChannelInput and SharedChannelOutput, that do not reveal any extra functionality but indicate that the given channel-end can be safely shared (internally) between multiple concurrent sub-processes. Channels with unshared ends, such as One2OneChannel, cannot be plugged into them.
A significant change is that channels, such as One2OneChannel and Any2OneChannel, are now interfaces (not classes) with two methods: in() for extracting the reading-end and out() for the writing-end. Implementations of these channel-end interfaces are package-only known classes returned by static methods of the Channel class (or actual instances of class factories, such as StandardChannelFactory).
In fact, those package-only known channel-end implementing classes are the same as the package-only known classes implementing channels – so, processes can still cast channel inputs to outputs and vice-versa!
1.3. JCSP 1.1
JCSP 1.1 merges the two libraries. Channel-end interfaces and abstract classes remain the same. Channels themselves are interfaces, as in the JCSP Network Edition. This time, however, channel-end implementations are package-only known classes that delegate their methods to different package-only known classes implementing the channels. Further, the input-end implementing classes are different from the output-end classes. So, input-ends and output-ends can no longer be cast into each other. Apart from this improvement in security, the change is not apparent and the API remains the same as that for JCSP Network Edition.
Users of the library are only exposed to interfaces (or abstract classes) representing the functionality of channels and channel-ends. Implementation classes are completely hidden. This also allows for easier future changes without affecting the visible API.
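The delegation structure can be sketched with simplified stand-in types of our own devising (the real JCSP classes carry far more machinery, but the casting argument is the same): since the two end classes are unrelated, neither end can be cast to the other.

```java
// Simplified stand-in types illustrating the JCSP 1.1 structure
// (not the actual library code).
interface ChannelInput  { Object read(); }
interface ChannelOutput { void write(Object o); }

public class EndSeparationDemo {

    // Hidden channel implementation: a bare one-slot hand-off here
    // (a real channel synchronises reader and writer).
    static class HiddenChannel {
        Object slot;
    }

    // Distinct end classes delegate to the hidden channel, so neither
    // can be cast to the other end's type.
    static class InputEnd implements ChannelInput {
        private final HiddenChannel channel;
        InputEnd(HiddenChannel channel) { this.channel = channel; }
        public Object read() { return channel.slot; }
    }

    static class OutputEnd implements ChannelOutput {
        private final HiddenChannel channel;
        OutputEnd(HiddenChannel channel) { this.channel = channel; }
        public void write(Object o) { channel.slot = o; }
    }

    public static void main(String[] args) {
        HiddenChannel chan = new HiddenChannel();
        ChannelInput in = new InputEnd(chan);
        ChannelOutput out = new OutputEnd(chan);
        out.write("hello");
        if (!"hello".equals(in.read())) throw new AssertionError();
        // In JCSP 1.0-rc7 a single object implemented both interfaces, so a
        // cross-cast would have succeeded; here the end classes are unrelated.
        if (in instanceof ChannelOutput) throw new AssertionError();
        System.out.println("ends cannot be cross-cast");
    }
}
```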
1.4. Using Channels from within a Process
The JCSP process view of its external channels is unchanged. Here is a simple, but fair, multiplexor:
```java
public final class FairPlex implements CSProcess {

  private final AltingChannelInput[] in;
  private final ChannelOutput out;

  public FairPlex (AltingChannelInput[] in, ChannelOutput out) {
    this.in = in;
    this.out = out;
  }

  public void run () {
    final Alternative alt = new Alternative (in);
    while (true) {
      final int i = alt.fairSelect ();
      out.write (in[i].read ());
    }
  }

}
```
1.5. Building Networks of Processes
To build a network, channels must be constructed and used to wire together (concurrently running) process instances. In JCSP 1.0-rc7, channels were directly plugged into processes. Now, as in occam-π and the JCSP Network Edition, we must specify which ends of each channel to use.
All channels are now constructed using static methods of the Channel class (or an instance of one of the specialist channel factories):
```java
final One2OneChannel[] a = Channel.one2oneArray (N);   // an array of N channels
final One2OneChannel b = Channel.one2one ();           // a single channel
```
Here is a network consisting of an array of Generator processes, whose outputs are multiplexed through FairPlex to a Consumer process⁴. They are connected using the above channels:
```java
final Generator[] generators = new Generator[N];
for (int i = 0; i < N; i++) {
  generators[i] = new Generator (i, a[i].out ());
}

final FairPlex plex = new FairPlex (Channel.getInputArray (a), b.out ());
final Consumer consumer = new Consumer (b.in ());

new Parallel (new CSProcess[] {new Parallel (generators), plex, consumer}).run ();
```
In JCSP 1.0-rc7, the actual channels (a and b) are passed to the process constructors. Now, we must pass the correct ends. The input-end of a channel is extracted using the in() method; the output-end using out()⁵. FairPlex needs an array of channel input-ends, which we could have constructed ourselves, applying in() to the individual channel elements. However, this is simplified through the static helper methods, getInputArray() and getOutputArray(), provided by the Channel factory.
⁴This example is to illustrate the use of channels, including channel arrays, in network construction. If we really only need fair and straightforward multiplexing of individual messages, it would be much simpler and more efficient to connect the generators directly to the consumer using a single Any2OneChannel.
⁵These correspond to the direction specifiers (? and !) mandated by occam-π. The method names in() and out() must be interpreted from the point of view of the process – not the channel. The input-end is the end of the channel from which a process inputs messages – not the end of the channel into which messages are put. JCSP is a process-oriented model and our terms are chosen accordingly.
2. Alting Barriers
JCSP has long provided a Barrier class, on which multiple processes can be enrolled. When one process attempts to synchronise on a barrier, it blocks until all enrolled processes do the same thing. When the last arrives at the barrier, all processes are released. They allow dynamic enrollment and resignation, following mechanisms introduced into occam-π [8,21].
This corresponds to fundamental multiway event synchronisation in CSP. However, although CSP allows processes to offer multiway events as part of an external choice, JCSP does not permit this for Barrier synchronisation. Once a process engages with a Barrier, it cannot back off (e.g. as a result of a timeout, an arriving channel communication or another barrier). The reason is the same as why channel output guards are not allowed. Only one party to any synchronisation is allowed to withdraw (i.e. to use that synchronisation as a guard in external choice – alting). This enables event choice to be implemented with a simple (and fast) handshake from the party making the choice to its chosen partner (who is committed to waiting). Relaxing this constraint implies resolving a choice on which all parties must agree and from which anyone can change their mind (after initially indicating approval). In general, this requires a two-phase commit protocol, which is costly and difficult to get right [22].
This constraint has been universally applied in all practical CSP implementations to date. It means that CSP systems involving external choice over multiway events cannot, generally, be directly executed. Instead, those systems must be transformed (preserving their semantics) into those meeting the constraints – which means adding many processes and channels to manage the necessary two-phase commit.
JCSP 1.0-rc7 and 1.1 introduce the AltingBarrier class that overcomes that constraint, allowing multiple barriers to be included in the guards of an Alternative – along with skips, timeouts, channel communications and call channel accepts. Currently, this is supported only for a single JVM (which can be running on a multicore processor). It uses a fast implementation that is not a two-phase commit. It has overheads that are linear with respect to the number of barrier offers being made. It is based on the Oracle mechanism described at [23,24,25] and summarised in section 2.5.
2.1. User View of Alting Barriers
An alting barrier is represented by a family of AltingBarrier front-ends. Each process using the barrier must do so via its own front-end – in the same way that a process uses a channel via its channel-end. A new alting barrier is created by the static create method, which returns an array of front-ends – one for each enrolled process. If additional processes need later to be enrolled, further front-ends may be made from an existing one (through expand and contract methods). As with the earlier Barrier class, processes may temporarily resign from a barrier and, later, re-enrol.
To use this barrier, a process simply includes its given AltingBarrier front-end in a Guard array associated with an Alternative. Its index will be selected if and only if all parties (processes) to the barrier similarly select it (using their own front-ends).
If a process wishes to commit to this barrier (i.e. not offer it as a choice in an Alternative), it may sync() on it. However, if all parties only do this, a non-alting Barrier would be more efficient. A further shortcut (over using an Alternative) is provided to poll (with timeout) this barrier for completion.
An AltingBarrier front-end may only be used by one process at a time (and this is checked at run-time). A process may communicate a non-resigned front-end to another process; but the receiving process must mark it before using it and, of course, the sending process must not continue to use it. If a process terminates holding a front-end, it may be recycled for use by another process via a reset.
Full details of expanding/contracting the set of front-ends, temporary resignation and re-enrolment, communication, marking and resetting of front-ends, committed synchronisation and time-limited polling are given in the JCSP documentation (on-line at [26]).
2.2. Priorities
These do not – and cannot – apply to selection between barriers. The priSelect() method works locally for the process making the offer. If this were allowed, one process might offer barrier x with higher priority than barrier y ... and another process might offer them with its priorities the other way around. In which case, it would be impossible to resolve a choice in favour of x or y in any way that satisfied the conflicting priorities of both processes.
However, the priSelect() method is allowed for choices including barrier guards. It honours the respective priorities defined between non-barrier guards ... and those between a barrier guard and non-barrier guards (which guarantees, for example, immediate response to a timeout from ever-active barriers). Relative priorities between barrier guards are inoperative.
2.3. Misuse
The implementation defends against misuse, throwing an AltingBarrierError when riled. Currently, the following bad things are prevented:
- different threads trying to operate on the same front-end;
- attempt to enrol whilst enrolled;
- attempt to use as a guard whilst resigned;
- attempt to sync, resign, expand, contract or mark whilst resigned;
- attempt to contract with an array of front-ends not supplied by expand.
Again, we refer to the documentation, [26], for further details and explanation.
2.4. Example
Here is a simple gadget with two modes of operation, switched by a click event (operated externally by a button in the application described below). Initially, it is in individual mode – represented here by incrementing a number and outputting it (as a string to change the label on its controlling button) as often as it can. Its other mode is group, in which it can only work if all associated gadgets are also in this mode. Group work consists of a single decrement and output of the number (to its button’s label). It performs group work as often as the group will allow (i.e. until it, or one of its partner gadgets, is clicked back to individual mode).
```java
import java.awt.Color;
import org.jcsp.lang.*;

public class Gadget implements CSProcess {

  private final AltingChannelInput click;
  private final AltingBarrier group;
  private final ChannelOutput configure;

  public Gadget (
    AltingChannelInput click, AltingBarrier group, ChannelOutput configure
  ) {
    this.click = click;
    this.group = group;
    this.configure = configure;
  }

  public void run () {
    final Alternative clickGroup =
      new Alternative (new Guard[] {click, group});
    final int CLICK = 0, GROUP = 1;             // indices to the Guard array
    int n = 0;
    configure.write (String.valueOf (n));
    while (true) {
      configure.write (Color.green);            // indicate mode change
      while (!click.pending ()) {               // individual work mode
        n++;                                    // work on our own
        configure.write (String.valueOf (n));   // work on our own
      }
      click.read ();                            // must consume the click
      configure.write (Color.red);              // indicate mode change
      boolean group = true;                     // group work mode
      while (group) {
        switch (clickGroup.priSelect ()) {      // offer to work with the group
          case CLICK:
            click.read ();                      // must consume the click
            group = false;                      // back to individual work mode
            break;
          case GROUP:
            n--;                                // work with the group
            configure.write (String.valueOf (n));  // work with the group
            break;
        }
      }
    }
  }

}
```
The front-end to the alting barrier shared by other gadgets in our group is given by the group parameter of the constructor, along with click and configure channels from and to our button process.
Note that in the above – and for most uses of these alting barriers – no methods are explicitly invoked. Just having the barrier in the guard set of the Alternative is sufficient.
This gadget's offer to work with the group is made by the priSelect() call on clickGroup. If all other gadgets in our group make that offer before a mouse click on our button, this gadget (together with all those other gadgets) proceed together on their joint work – represented here by decrementing the count on its button’s label. All gadgets then make another offer to work together.
This sequence gets interrupted if any button on any gadget gets clicked. The relevant gadget process receives the click signal and will accept it in preference to further group synchronisation. The clicked gadget reverts to its individual mode of work (incrementing the count on its button’s label), until that button gets clicked again – when it will attempt to rejoin the group. While any gadget is working on its own, no group work can proceed.
Here is complete code for a system of buttons and gadgets, synchronised by an *alting barrier*. Note that this *single* event needs an *array* of `AltingBarrier` front-ends to operate – one for each gadget:
```java
import org.jcsp.lang.*;
public class GadgetDemo {
public static void main (String[] argv) {
final int nUnits = 8;
// make the buttons
final One2OneChannel[] event = Channel.one2oneArray (nUnits);
final One2OneChannel[] configure = Channel.one2oneArray (nUnits);
final boolean horizontal = true;
final FramedButtonArray buttons =
new FramedButtonArray ("AltingBarrier: GadgetDemo", nUnits, 120, nUnits*100,
horizontal, configure, event);
// construct an array of front-ends to a single alting barrier
final AltingBarrier[] group = AltingBarrier.create (nUnits);
// make the gadgets
final Gadget[] gadgets = new Gadget[nUnits];
for (int i = 0; i < gadgets.length; i++) {
gadgets[i] = new Gadget (event[i], group[i], configure[i]);
}
// run everything
new Parallel (new CSProcess[] {
buttons, new Parallel (gadgets)
}).run ();
}
}
```
This example only contains a single alting barrier. The JCSP documentation [26] provides many more examples – including systems with intersecting sets of processes offering multiple multiway barrier synchronisations (one for each set to which they belong), together with timeouts and ordinary channel communications. There are also some *games*!
2.5. Implementation Oracle
A fast resolution mechanism of choice between multiple multiway synchronisations depends on an *Oracle* server process, [23,24,25]. This maintains information for each barrier and
each process enrolled. A process offers atomically a set of barriers with which it is prepared to engage and blocks until the Oracle tells it which one has been breached. The Oracle simply keeps counts of, and records, all the offer sets as they arrive. If a count for a particular barrier becomes complete (i.e. all enrolled processes have made an offer), it informs the lucky waiting processes and atomically withdraws all their other offers – before considering any new offers.
2.5.1. Adapting the Oracle for JCSP (and occam-π)
For JCSP, these mechanics need adapting to allow processes to make offers to synchronise that include all varieties of Guard – not just AltingBarriers. The logic of the Oracle process is also unravelled to work with the usual enable/disable sequences implementing the select methods invoked on Alternative. Note: the techniques used here for JCSP carry over to a similar notion of alting barriers for an extended occam-π [27].
The AltingBarrier.create(n) method first constructs a hidden base object – the actual alting barrier – before constructing and returning an array of AltingBarrier front-ends. These front-ends reference the base and are chained together. The base object is not shown to JCSP users and holds the first link to the chain of front-ends. It maintains the number of front-ends issued (which it assumes equals the number of processes currently enrolled) and a countdown of how many offers have not yet been made to synchronise. It has methods to expand and contract the number of front-ends and manage temporary resignation and re-enrolment of processes. Crucially, it implements the methods for enabling (i.e. receiving an offer to synchronise) and disabling (i.e. answering an enquiry as to whether the synchronisation has completed and, if not, withdrawing the offer). These responsibilities are delegated to it from the front-end objects.
Each AltingBarrier front-end maintains knowledge of the process using it (thread id and resigned status) and checks that it is being operated correctly. If all is well, it claims the monitor lock on the base object and delegates the methods. Whilst holding the lock, it maintains a reference to the Alternative object of its operating process (which might otherwise be used by another process, via the base object, upon a successful completion of the barrier).
The Oracle logic works because each full offer set from a process is handled atomically. The select methods of Alternative make individual offers (enables) from its guard array in sequence. A global lock, therefore, must be obtained and held throughout any enable sequence involving an AltingBarrier – to ensure that the processing of its set of offers (on AltingBarriers) is not interleaved with those from any other set. If the enables all fail, the lock must be released before the alting process blocks. If an offer (enable) succeeds in completing one of the barriers in the guard set, the lock must continue to be held throughout the subsequent disable (i.e. withdraw) sequence and the disable sequences of all the other partners in the successful barrier (which will be scheduled by the successful enable)6. Other disable sequences (i.e. those triggered by a successful non-barrier synchronisation) do not need to acquire this lock – even if an alting barrier is one of the guards to be disabled.
2.5.2. Distributing the Oracle
The current JCSP release supports AltingBarriers only within a single JVM. Extending this to support them across a distributed system has some issues.
A simple solution would be to install an actual Oracle process at a network location known to all. At the start of any enable sequence, a network-wide lock on the Oracle is obtained (simply by communicating with it on a shared claim channel). Each enable/disable then becomes a communication to and from the Oracle. The network lock is released following the same rules outlined for the single JVM (two paragraphs back). However, the network overheads for this (per enable/disable) and the length of time required to hold the network-wide lock look bad.
---
6 This means that multiple processes will need to hold the lock in parallel, so that a counting semaphore (rather than a monitor) has to be employed.
A better solution may be to operate the fast Oracle logic locally within each JVM – except that, when a local barrier is potentially overcome (because all local processes have offered to engage with it), the local JCSP kernel negotiates with its partner nodes through a suitable two-phase commit protocol. This allows the local kernel to cancel safely any network offer, should local circumstances change. Only if the network negotiation succeeds are the local processes informed.
2.5.3. Take Care
The logic required for correct implementation of external choice (i.e. the Alternative class) is not simple. The version just for channel input synchronisation required formalising and model checking before we got it right [17]. Our implementation has not (yet) been observed to break under stress testing, but we shall not feel comfortable until this has been repeated for these multiway events. Full LGPL source codes are available by request.
3. Output Guards
It has long been an accepted constraint of occam-π and its derivative frameworks (e.g. JCSP, C++CSP, the CSP implementations for .NET) that channels only support input guards for use in alternatives, and not output guards. The decision allows a much faster and simpler implementation for the languages/frameworks [23].
Now, however, alting barriers provide a mechanism on which channels with both input and output guards can easily be built, as described in [22]. Because there are still extra run-time costs, JCSP 1.1 offers a different channel for this – for the moment christened One2OneChannelSymmetric.
This symmetric channel is composed of two internal synchronisation objects: one standard non-buffered one-to-one channel and one alting barrier. Supporting this, a new channel-end interface (actually abstract class), AltingChannelOutput, has been added and derives simply from Guard and ChannelOutput. We are only providing zero-buffered one-to-one symmetrically alting channels for the moment.
The reading and writing processes are the only two enrolled on the channel’s internal barrier – on which, of course, they can alt.
For any committed communication, a process first commits to synchronise on the internal barrier. When/if that synchronisation completes, the real communication proceeds on the internal one-to-one channel as normal.
If either process wants to use the channel as a guard in an alternative, it offers to synchronise on the internal barrier – an offer that can be withdrawn if one of the other guards fires first. If its offer succeeds, the real communication proceeds on the internal channel as before.
Of course, all these actions are invisible to the using processes. They use the standard API for obtaining channel-ends and reading and writing. Either channel-end can be included in a set of guards for an Alternative.
Here is a pathological example of its use. There are two processes, A and B, connected by two opposite direction channels, c and d. From time to time, each process offers to communicate on both its channels (i.e. an offer to read and an offer to write). They do no other communication on those channels. What must happen is that the processes resolve their choices in compatible ways – one must do the writing and the other the reading. This is, indeed, what happens. Here is the A process:
```java
class A implements CSProcess {

  private final AltingChannelInput in;
  private final AltingChannelOutput out;

  ... standard constructor

  public void run () {
    final Alternative alt = new Alternative (new Guard[] {in, out});
    final int IN = 0, OUT = 1;
    ... other local declarations and initialisation
    while (running) {
      ... set up outData
      switch (alt.fairSelect ()) {
        case IN:
          inData = (InDataType) in.read ();
          ... reaction to this input
          break;
        case OUT:
          out.write (outData);
          ... reaction to this output
          break;
      }
    }
  }

}
```
The B process is the same, but with different initialisation and reaction codes and types. The system must be connected with symmetric channels:
```java
public class PathologicalDemo {

  public static void main (String[] argv) {
    final One2OneChannelSymmetric c = Channel.one2oneSymmetric ();
    final One2OneChannelSymmetric d = Channel.one2oneSymmetric ();
    new Parallel (new CSProcess[] {
      new A (c.in (), d.out ()),
      new B (d.in (), c.out ())
    }).run ();
  }

}
```
4. Extended Rendezvous
Extended rendezvous was an idea originally introduced in occam-π [28]. After reading from a channel, a process can perform some actions without scheduling the writing process – extending the rendezvous between writer and reader. When it has finished those actions (and it can take its own time over this), it must then schedule the writer. Only the reader may perform this extension, and the writer is oblivious as to whether it happens.
Extended rendezvous is made available in JCSP through the `ChannelInput.startRead()` and `ChannelInput.endRead()` methods. The `startRead()` method starts the extended rendezvous, returning with a message when the writer sends it. The writer now remains blocked (engaged in the extended rendezvous) until, eventually, the reader invokes the `endRead()` method. They can be used in conjunction with **alternation** – following the (input) channel’s selection, simply invoke `startRead()` and `endRead()` instead of the usual `read()`.
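The mechanism can be illustrated with a minimal monitor-based sketch (this is not the JCSP implementation – class and field names here are hypothetical): the writer hands over its message, but remains blocked after `startRead()` has returned it to the reader, and is only released when the reader invokes `endRead()`.

```java
// Minimal monitor-based sketch (not the JCSP implementation) of an
// unbuffered one-to-one channel with extended rendezvous: write() blocks
// until the reader calls endRead(); startRead() returns the message but
// does not release the writer.
class ExtRendezvousChannel {

  private Object hold;
  private boolean full = false, done = false;

  public synchronized void write(Object o) throws InterruptedException {
    hold = o;
    full = true;
    notifyAll();                  // wake a waiting reader
    while (!done) wait();         // extended: blocked until endRead()
    done = false;
  }

  public synchronized Object startRead() throws InterruptedException {
    while (!full) wait();         // wait for a writer
    return hold;                  // note: the writer stays blocked
  }

  public synchronized void endRead() {
    full = false;
    done = true;
    notifyAll();                  // only now is the writer released
  }

  public Object read() throws InterruptedException {
    Object o = startRead();
    endRead();
    return o;
  }
}
```

A plain `read()` is then just `startRead()` immediately followed by `endRead()` – which is why the writer is oblivious as to whether the rendezvous was extended.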
### 4.1. Examples – a Message Logger and Debugging GUI
Consider the (unlikely) task of tracking down an error in a JCSP system. We want to delay and/or observe values sent down a channel. We could insert a special process into the channel to manage this, but that would normally introduce buffering into the system. In turn, that changes the synchronisation behaviour of the system which could easily mask the error – especially if that error was a deadlock.
However, if the inserted process were to use extended rendezvous, we can arrange for there to be no change in the synchronisation. For example, the following **channel tapping** process might be used for this task:
```java
class Tap implements CSProcess {
private ChannelInput in; // from the original writer
private ChannelOutput out; // to the original reader
private ChannelOutput tapOut; // to a message logger
... standard constructor
public void run () {
while (true) {
Cloneable message = in.startRead(); // start of extended rendezvous
{
tapOut.write(message.clone());
out.write(message);
}
in.endRead(); // finish of extended rendezvous
}
}
}
```
This process begins an extended rendezvous, copies the message to its **tapping** channel before writing it to the process for which it was originally intended. Only when this communication is complete does the extended rendezvous end. So long as the report to the message logger is guaranteed to succeed, this preserves the synchronisation between the original two processes: the original writer is released if-and-only-if the reader reads.
The extra code block and indentation in the above (and below) example is suggested to remind us to invoke the `endRead()` method, matching the earlier `startRead()`.
Instead of a message logger, we could install a process that generates a GUI window to display passing messages. As these messages are only held during the extended rendezvous of Tap, that process no longer needs to clone its messages. For example:
```java
class MessageDisplay implements CSProcess {

  private ChannelInput in;    // from the tap process

  ... standard constructor

  public void run () {
    while (true) {
      Object message = in.startRead ();    // start of extended rendezvous
      {
        ... display message in a pop-up message box
        ... only return when the user clicks OK
      }
      in.endRead ();                       // finish of extended rendezvous
    }
  }

}
```
Instead of performing communication in its extended rendezvous, the above process interacts with the user through a GUI. The rendezvous is not completed until the user has seen the data value and clicked OK. This in turn delays the tap process until the user clicks OK, which in turn prevents the original communication between the original two processes until the user has clicked OK.
The addition of these two processes has not altered the semantics of the original system – apart from giving the GUI user visibility of, and delaying ability over, communications on the tapped channel.
With trivial extra programming (e.g. writing a null to the tapping channel at the end of the extended rendezvous in Tap), the MessageDisplay could also clear its message box when the reader process takes the message. If this were done for all channels, a deadlocked system would show precisely where messages were stuck.
Such advanced debugging capabilities can be built entirely with the public API of JCSP. There is no need to delve into the JCSP implementation.
4.2. Rules
The `endRead()` method must be called exactly once after each call to `startRead()`. If the reader poisons the channel (section 5) between a `startRead()` and `endRead()`, the channel will be poisoned; but the current communication is deemed to have happened (which, indeed, it has) and no exception is thrown. In fact, `endRead()` will never throw a poison exception. Poison is explained in section 5.
4.3. Extended Rendezvous on Buffered Channels
Extended rendezvous and buffered channels have not previously been combined. occam-π, which introduced the extended rendezvous concept, does not support buffered channels. C++CSP originally disallowed extended rendezvous on buffered channels using a badly-designed exception7. To distinguish between channel-ends that did, and did not, support extended rendezvous, a more complicated type system would have been necessary. In addition to AltingChannelInput and ChannelInput, we would need AltingExtChannelInput and ExtChannelInput. Similarly, there would need to be two more classes for the shared versions.
Instead, we took the decision to allow extended rendezvous on buffered channels, thereby eliminating any divide. The semantics of extended rendezvous on a buffered channel are dependent on the semantics of the underlying buffer. The semantics for (some of) the standard buffers provided with JCSP are explained in the following sub-sections.
7 In the new C++CSP2 [29], the classes have been restructured and the implementation is identical to the new JCSP implementation described here.
4.3.1. Blocking FIFO Buffers
The reasoning behind the implemented behaviour of extended rendezvous on FIFO buffered channels with capacity $N$ comes from the semantically equivalent pipeline of $N$ ’id’ processes (i.e. one-place blocking buffers) connected by non-buffered channels. When an extended rendezvous is begun by the process reading from the buffered channel, the first available (that is, the oldest) item of data is read from the channel, but not removed from its internal buffer. If no item of data is available, the process must block. Data is only removed from the channel buffer when the extended rendezvous is completed. This mirrors the semantics of an extended rendezvous on the (unbuffered) output channel of the one-place buffer pipeline.
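The discipline described above can be sketched as follows (a hypothetical illustration, not JCSP source): `startRead()` peeks the oldest item without removing it from the buffer; only `endRead()` removes it, so the slot stays occupied for the duration of the rendezvous and a writer to a full buffer remains blocked until then.

```java
import java.util.ArrayDeque;

// Sketch of the FIFO-buffer discipline described above (hypothetical class,
// not JCSP source): startRead() returns the oldest item without removing it;
// only endRead() removes it, so the slot stays occupied for the duration of
// the extended rendezvous.
class FifoExtBuffer<T> {

  private final ArrayDeque<T> buf = new ArrayDeque<>();
  private final int capacity;

  FifoExtBuffer(int capacity) { this.capacity = capacity; }

  synchronized void write(T item) throws InterruptedException {
    while (buf.size() == capacity) wait();   // blocking FIFO: full -> writer waits
    buf.addLast(item);
    notifyAll();
  }

  synchronized T startRead() throws InterruptedException {
    while (buf.isEmpty()) wait();            // empty -> reader waits
    return buf.peekFirst();                  // read the oldest, but do NOT remove
  }

  synchronized void endRead() {
    buf.pollFirst();                         // remove only when the rendezvous ends
    notifyAll();                             // a blocked writer may now proceed
  }
}
```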
4.3.2. Overwriting (Oldest) Buffers
When full, writing to these channels does not block – instead, the new data overwrite the oldest data in the channel. Thus, the channel always holds the freshest available data – which is important for real-time (and other) systems.
There is no simple equivalent of such an overwriting buffer made from unbuffered channels, so we have no simple guidance for its semantics. Instead we choose to follow the principle of least surprise. As with the FIFO buffers, when an extended rendezvous begins, the least recent data item is read from the buffer but not removed. At any time, the writer writes to the buffer as normal, overwriting data when full – the first such one overwritten being the data just read. When the extended rendezvous completes, the data item is removed – unless that data ‘slot’ has indeed been overwritten. This requires the channel buffer to keep track of whether the data being read in an extended rendezvous has been overwritten or not.
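These rules can be sketched as follows (hypothetical code, following the semantics just described): the writer never blocks; a flag records whether the item under an extended read has been overwritten, and `endRead()` removes the item only if it survived.

```java
import java.util.ArrayDeque;

// Sketch (hypothetical, following the rules above) of an overwrite-oldest
// buffer under extended rendezvous: the writer never blocks; if the slot
// being read is overwritten during the rendezvous, endRead() removes nothing.
class OverwritingExtBuffer<T> {

  private final ArrayDeque<T> buf = new ArrayDeque<>();
  private final int capacity;
  private boolean reading = false;
  private boolean readingOverwritten = false;  // was the item under read lost?

  OverwritingExtBuffer(int capacity) { this.capacity = capacity; }

  synchronized void write(T item) {
    if (buf.size() == capacity) {
      buf.pollFirst();                         // overwrite the oldest...
      if (reading) readingOverwritten = true;  // ...which may be the one being read
    }
    buf.addLast(item);
    notifyAll();
  }

  synchronized T startRead() throws InterruptedException {
    while (buf.isEmpty()) wait();
    reading = true;
    readingOverwritten = false;
    return buf.peekFirst();                    // least recent item, not removed
  }

  synchronized void endRead() {
    if (!readingOverwritten) buf.pollFirst();  // remove only if it survived
    reading = false;
  }
}
```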
An overwriting buffered channel breaks most of the synchronisation between reader and writer. The writer can always write. The reader blocks when nothing is in the channel, but otherwise obtains the latest data and must accept that some may have been missed. Extended rendezvous is meant to block the writer for a period after a reader has read its message – but the writer must never block!
The above implementation yields what should happen if the writer had come along after the extended rendezvous had completed. Since the writer’s behaviour is independent from the reader in this case, we take the view that an earlier write (during the rendezvous) is a scheduling accident that should have no semantic impact – i.e. that it is proper to ignore it.
4.3.3. Zero Buffers
Extended rendezvous on a channel using a ZeroBuffer is, of course, identical to extended rendezvous on a normal unbuffered channel.
5. Poison and Graceful Termination
In [30], a general algorithm for the deadlock-free termination (and resetting) of CSP/occam networks (or sub-networks) was presented. This worked through the distribution of poison messages, resulting in poisoned processes having to take a defined set of termination actions (in addition to anything needed for process specific tidyness). This logic, though simple, was tedious to implement (e.g. in extending the channel protocol to introduce poison messages). Furthermore, the poison could not distribute against the flow of its carrying channels, so special changes had to be introduced to reach processes upstream.
The poison presented here applies to channels rather than processes – and it can spread upstream. When a channel is poisoned, any processes waiting on the channel are woken up and a poison exception thrown to each of them. All future reads/writes on the channel result in a poison exception being thrown – there is no antidote! Further attempts to poison the channel are accepted but ignored. This idea was originally posted by Gerald Hilderink [31].
Poison is used to shutdown a process network – simply and gracefully, with no danger of deadlock. For example, processes can set a single poison exception catch block for the whole of their normal operation. The handler responds just by poisoning all its external channels. It doesn’t matter whether any of them have already been poisoned.
Poison spreads around a process network viewed as an undirected graph, rather than trying to feed poison messages around a directed graph. These ideas have already been implemented in C++CSP, and by Sputh and Allen for JCSP itself [32]. This revised JCSP 1.1 poison builds on these experiences.
5.1. API Rationale
One option for adding poison to JCSP would have been to add poisonous channel-ends as separate additional interfaces. This would cause a doubling in the number of channel-end interfaces for JCSP. The reasoning presented in [33] still holds: a separation of poisonous and non-poisonable channel-ends in the type system would lead to complex common processes that would need to be re-coded for each permutation of poisonous and non-poisonable channel-ends. Therefore, all channel-ends have poison(strength) methods.
Although all channel-ends have the poison methods, they do not have to be functional. Some channels do not permit poisoning – for example, the default ones: attempts to poison them are ignored.
5.2. Poison Strength
In [32], Sputh and Allen proposed the idea of two levels of poison – local and global. Channels could be constructed immune to local poison. Thus, networks could be built with sub-networks connected only by local-immune channels. Individual sub-networks could then be individually terminated (and replaced) by one of their components injecting local poison. Alternatively, the whole system could be shut down by global poison.
These ideas have been generalised to allow arbitrary (positive integer) levels of poison in JCSP 1.1. This allows many levels of nested sub-network to be terminated/reset at any of its levels. Poisonable channels are created with a specific level of immunity: they will only be poisoned with a poison whose strength is greater than their immunity. Poison exceptions carry the strength with which the channel has been poisoned: their handlers propagate poison with that same strength.
Channels carry the current strength of poison inside them: zero (poison-free) or greater than their immunity (poisoned). That strength can increase with subsequent poisoning, but is not allowed to decrease (with a weaker poison).
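The strength/immunity rules reduce to a few lines of state logic. Here is an illustrative sketch (class and method names are hypothetical, not JCSP source): a channel with immunity *i* is poisoned only by strength *s* > *i*, and the stored strength may grow but never shrink.

```java
// Sketch (hypothetical, not JCSP source) of the poison-strength rules above:
// a channel with immunity i is poisoned only by strength s > i, and the
// stored strength may increase with subsequent poisoning but never decrease.
class PoisonableState {

  private final int immunity;
  private int strength = 0;          // 0 = poison-free

  PoisonableState(int immunity) { this.immunity = immunity; }

  void poison(int s) {
    if (s > immunity && s > strength) {   // ignore weak (or weaker) poison
      strength = s;
    }
  }

  boolean isPoisoned() { return strength > 0; }

  int getStrength() { return strength; }
}
```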
Note that using different strengths of poison can have non-deterministic results. For example, if different waves of poison, with different strengths, are propagating in parallel over part of a network whose channels are not immune, the strength of the poison exception a process receives will be scheduling dependent – which wave struck first! If a lower strength were received, it may fail to propagate that poison to some of its (more immune) channels before it terminates: without, of course, dealing with the stronger poison arriving later. Care is needed here.
5.3. Trusted and Untrusted Poisoners
Channel-ends of poisonous channels can be created specifically without the ability to poison (as in C++CSP [34]): attempts will be ignored (as if their underlying channel were not poisonous). Disabling poisoning at certain channel-ends of otherwise poisonous channels allows networks to be set up with trusted and untrusted poisoners. The former (e.g. a server process) has the ability to shut down the network. The latter (e.g. remote clients) receive the network poisoning but cannot initiate it.
5.4. Examples
Here is a standard running-sum integrator process, modified to support network shutdown after poisoning:
```java
public class IntegrateInt implements CSProcess {
private final ChannelInput in;
private final ChannelOutput out;
public IntegrateInt (ChannelInput in, ChannelOutput out) {
this.in = in;
this.out = out;
}
public void run () {
try {
int sum = 0;
while (true) {
sum += in.read ();
out.write (sum);
}
} catch (PoisonException e) { // poison everything
int strength = e.getStrength ();
out.poison (strength);
in.poison (strength);
}
}
}
```
A guard for a channel is considered ready if the channel is poisoned. This poison will only be detected, however, if the channel is selected and the channel communication attempted. Here is a modification of the FairPlex process (from section 1.4) to respond suitably to poisoning. The only change is the addition of the try/catch block in the run() method:
```java
public final class FairPlex implements CSProcess {
private final AltingChannelInput[] in;
private final ChannelOutput out;
... standard constructor
public void run () {
try {
final Alternative alt = new Alternative (in);
while (true) {
final int i = alt.fairSelect ();
out.write (in[i].read ());
}
} catch (PoisonException e) { // poison everything
int strength = e.getStrength ();
out.poison (strength);
for (int i = 0; i < in.length; i++) {
in[i].poison (strength);
}
}
}
}
```
If the out channel is poisoned, the poison exception will be thrown on the next cycle of FairPlex. If any of the in channels is poisoned, its guard becomes ready straight away. This may be ignored if there is traffic from unpoisoned channels available and FairPlex will continue to operate normally. However, the fair selection guarantees that no other input channel will be serviced twice before that poisoned (and ready) one. In the worst case, this will be after (in.length - 1) cycles. When the poisoned channel is selected, the exception is thrown.
5.5. Implementation
The central idea behind adding poison to all the existing channel algorithms is simple. Every time a channel wakes up from a wait, it checks to see whether the channel is poisoned. If it is, the current operation is abandoned and a PoisonException (carrying the poison strength) is thrown.
However, with just the above approach, it would be possible for a writing process (that was late in being rescheduled) to observe poison added by a reader after the write had completed successfully. This was discovered (by one of the authors [35]) from formalising and (FDR [16]) model checking this (Java) implementation against a more direct CSP model, using techniques developed from [17].
Therefore, an extra field is added so that a successfully completed communication is always recorded in the channel, regardless of any poison that may be injected afterwards. Now, the writer can complete normally and without exception – the poison remaining in the channel for next time. This correction has been model checked [35]. It has also been incorporated in the revised C++CSP [36].
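The corrected discipline can be sketched in plain Java (this is an illustrative monitor-based sketch, not the verified JCSP source; `IllegalStateException` stands in for JCSP's PoisonException): every wake-up from a wait re-checks for poison, but a `done` flag records a completed communication, so a late-rescheduled writer returns normally even if the reader poisons the channel afterwards.

```java
// Monitor-based sketch (not the verified JCSP source) of the correction
// above: each wake-up re-checks for poison, but a 'done' flag records that
// the communication already completed, so a late-rescheduled writer returns
// normally even if the reader poisons the channel afterwards.
// IllegalStateException stands in for JCSP's poison exception.
class PoisonableChannel {

  private Object hold;
  private boolean full = false, done = false;
  private int poisonStrength = 0;

  public synchronized void write(Object o) throws InterruptedException {
    if (poisonStrength > 0) throw new IllegalStateException("poisoned " + poisonStrength);
    hold = o;
    full = true;
    notifyAll();
    while (!done && poisonStrength == 0) wait();
    if (done) { done = false; return; }   // communication happened: no exception
    throw new IllegalStateException("poisoned " + poisonStrength);
  }

  public synchronized Object read() throws InterruptedException {
    if (poisonStrength > 0) throw new IllegalStateException("poisoned " + poisonStrength);
    while (!full && poisonStrength == 0) wait();
    if (!full) throw new IllegalStateException("poisoned " + poisonStrength);
    full = false;
    done = true;                          // record the completed communication
    notifyAll();
    return hold;
  }

  public synchronized void poison(int strength) {
    if (strength > poisonStrength) poisonStrength = strength;
    notifyAll();                          // wake anyone blocked on the channel
  }
}
```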
6. Conclusions and Future Work
The latest developments of JCSP have integrated the JCSP Network Edition and JCSP 1.0-rc7, keeping the advances each had made separately from their common ancestor. New concepts have been added: choice between multiple multiway synchronisations (alting barriers), output guards (symmetric channels), extended rendezvous and poison. The revised library is LGPL open sourced. We are working on further re-factorings to allow third parties to add new altable synchronisation primitives, without needing to modify existing sources. We list here a few extensions that have been requested by various users and are likely for future releases. Of course, with open source, we would be very pleased for others to complete these with us.
6.1. Broadcast Channels
Primitive events in CSP may synchronise many processes. Channel communications are just events and CSP permits any number of readers and writers. Many readers implies that all readers receive the same message: either all receive or none receive – this is multiway synchronisation. Many writers is a little odd: all must write the same message or no write can occur – still multiway synchronisation.
All channels currently in JCSP restrict communications to point-to-point message transfers between one writer and one reader. The any channels allow any number of writers and/or readers, but only one of each can engage in any individual communication.
Allowing CSP many-reader (broadcasting) channels turns out to be trivial – so we may as well introduce them. The only interesting part is making them as efficient as possible.
One way is to use a process similar to DynamicDelta from org.jcsp.plugNplay. This cycles by waiting for an input and, then, outputting in parallel on all output channels. That introduces detectable buffering, which is easily eliminated by combining the input and outputs in an extended rendezvous (Section 4). We still do not have multiway synchronisation, since the readers do not have to wait for each other to take the broadcast. This can be achieved by the *delta* process outputting twice and the readers reading twice. The first message can be `null` and is just to assemble the readers. Only when everyone has taken that is the real message sent. Getting the second message tells each reader that every reader is committed to receive. The *delta* process can even send each message *in sequence* to its output channels, reducing overheads (for unicore processors).
The above method has problems if we want to allow alting on the broadcast. Here is a simpler and faster algorithm that shows the power of *barrier synchronisation* – an obvious mechanism, in retrospect, for broadcasting!
```java
public class One2ManyChannelInt {

  private int hold;
  private final Barrier bar;

  public One2ManyChannelInt (final int nReaders) {
    bar = new Barrier (nReaders + 1);
  }

  public void write (int n) {   // no synchronized necessary
    hold = n;
    bar.sync ();                // wait for readers to assemble
    bar.sync ();                // wait for readers to read
  }

  public int read () {          // no synchronized necessary
    bar.sync ();                // wait for the writer and other readers
    int tmp = hold;
    bar.sync ();                // we've read it!
    return tmp;
  }

}
```
The above *broadcasting channel* supports only a fixed number of readers and no alting. This is easy to overcome using the dynamics of an `AltingBarrier`, rather than a `Barrier` – but is left for another time. For simplicity, the above code is also not *dressed* in the full JCSP mechanisms for separate channel-ends, poisoning etc. It also carries integers. Object broadcasting channels had better be carefully used! Probably, only *immutable* objects (or clones) should be broadcast. Otherwise, the readers should only ever read (never change) the objects they receive (and anything that they reference).
The above code uses the technique of *phased barrier synchronisation* [8,21,37]. Reader and writer processes share access to the `hold` field inside the channel. That access is controlled through phases divided by the barriers. In the first phase, only the writer process may write to `hold`. In the second, only the readers may read. Then, it's back to phase one. No locks are needed.
Most of the work is done by the first barrier, which cannot complete until all the readers and writer assemble. If this barrier were replaced by an *alting* one, that could be used to enable external choice for all readers and the writer.
Everyone is always committed to the second barrier, which cannot therefore stick. Its only purpose is to prevent the writer exiting, coming back and overwriting `hold` before all the readers have taken the broadcast. If the first barrier were replaced by an `AltingBarrier`, the second could remain as this (faster) `Barrier`.
However, other optimisations are possible – for example, by the readers decrementing a reader-done count (either atomically, using the new Java 1.5 concurrency utilities, or with a standard monitor lock) and with the last reader resetting the count and releasing the writer (waiting, perhaps, on a 2-way Barrier).
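The reader-done count optimisation can be sketched with an `AtomicInteger` (hypothetical code, not part of JCSP): each reader decrements the count once it has taken the broadcast; the last one resets the count and releases the writer, replacing the second barrier.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the suggested optimisation (hypothetical, not in JCSP): after
// taking the broadcast, each reader decrements an atomic count; the last one
// resets the count and releases the writer, replacing the second barrier.
class ReaderDoneCount {

  private final int nReaders;
  private final AtomicInteger remaining;
  private boolean writerReleased = false;

  ReaderDoneCount(int nReaders) {
    this.nReaders = nReaders;
    this.remaining = new AtomicInteger(nReaders);
  }

  // Called by each reader once it has taken the broadcast.
  void readerDone() {
    if (remaining.decrementAndGet() == 0) {    // last reader out
      remaining.set(nReaders);                 // reset for the next phase
      synchronized (this) {
        writerReleased = true;
        notify();                              // release the (single) writer
      }
    }
  }

  // Called by the writer after distributing the message: waits for all readers.
  synchronized void awaitReaders() throws InterruptedException {
    while (!writerReleased) wait();
    writerReleased = false;
  }
}
```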
6.2. Java 1.5 Generics
Java 1.5 (also known as Java 5) was a major release that introduced many new features. The three main additions pertinent to JCSP are generics, autoboxing, and the new java.util.concurrent package (and its subpackages).
Generics in Java are a weak form of generic typing. Their primary use is to enhance semantic clarity and eliminate some explicit type casting (whilst maintaining type safety). They have been particularly successful in the revised collection classes.
Generics can be used to more strongly type JCSP channels (and avoid the cast usually needed on the Object returned by a read() or startRead() method). They would make the type of the channel explicit and have it enforced by the compiler. Generics require a Java compiler of version 1.5 or later, but they can be compiled into earlier bytecode versions executable by Java 1.3.
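For example, typed channel-end interfaces might look as follows. This is purely illustrative (the interface and class names are hypothetical, not the released JCSP API): the element type is fixed at the channel, so reads need no cast.

```java
// Sketch of how generically-typed channel-end interfaces might look
// (illustrative only, not the released JCSP API): the element type is fixed
// at the channel, so reads need no cast at the call site.
interface ChannelInputGeneric<T> {
  T read() throws InterruptedException;
}

interface ChannelOutputGeneric<T> {
  void write(T item) throws InterruptedException;
}

// A trivial holder just to show the typing (not a real CSP channel: no
// blocking or rendezvous semantics here).
class TypedCell<T> implements ChannelInputGeneric<T>, ChannelOutputGeneric<T> {
  private T hold;
  public void write(T item) { hold = item; }
  public T read() { return hold; }     // no cast needed by the reader
}
```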
6.3. Java 1.5 Autoboxing
Autoboxing is the term for the automatic conversion from primitive types (such as int or double) into their class equivalents (Integer and Double respectively). Particularly when combined with generics, this allows primitive types directly to be used for communicating with generic processes through object-carrying channels. For example, if both autoboxing and generics are used in future versions of JCSP, the following codes would be legal. First, we need a generic channel:
One2OneChannel<Double> c = Channel.<Double>one2one (new Buffer<Double> (10));
Then, a writing process could execute:
out.write (6.7);
where out is the output-end of the above channel (i.e. c.out()). A reading process could execute:
double d = in.read ();
where in is the input-end of the above channel (i.e. c.in()). Note the lack of any casts in the above codes.
Like generics, autoboxing requires a 1.5 compiler but can be compiled to be executable by earlier versions, such as 1.3. This makes generics and autoboxing a potential candidate for inclusion in JCSP that would still allow Java 1.3 compatibility to be maintained – although it would mean that JCSP developers would need a Java 1.5 compiler.
6.4. Java 1.5 New Concurrency Utilities
The java.util.concurrent package contains new concurrency classes. Some classes complement JCSP well: the CopyOnWriteArrayList and CopyOnWriteArraySet classes can be safely shared between processes to increase efficiency.
Some classes have close similarity to certain JCSP primitives. CyclicBarrier is one such class, implementing a barrier (but with a useful twist in its tail). However, it does not support dynamic enrolment and resignation, nor any form of use in anything resembling external choice. Its support for the thread interruption features of Java makes it, arguably, more complex to use.
BlockingQueue looks similar to a FIFO-buffered channel, with Exchanger similar to an unbuffered channel. However, they are not direct replacements since neither class supports external choice.
The atomic classes (in java.util.concurrent.atomic) are tools on which JCSP primitives might profitably be built. This is an avenue for future work.
6.5. Networking
Consideration must also be taken as to how the new features in the core can be implemented into JCSP Network Edition. One of the strengths provided in JCSP is the transparency (to the process) of whether a channel is networked or local. If (generic) typed channels are to be implemented, then a method of typing network channels must also be available. This brings with it certain difficulties. Guarantees between two nodes must be made to ensure that the networked channel sends and receives the expected object type. However, of more importance at the moment is the implementation of networked barriers, and also networked alting barriers, to allow the same level of functionality at the network level as there is at the local level. Extended rendezvous and guarded outputs on network channels are also considerations.
If the move to exploit Java 1.5 is made in JCSP, then certain features of Java can be taken advantage of in the network stack to improve resource usage, and possibly performance. Java 1.4 introduced a form of ‘channel’, in its java.nio.channels package, that can be used to have the native system do some of the work for us. These channels can be used for multiplexing. Since they can represent network connections, we may be able to prune the current networking infrastructure of JCSP to reduce the number of processes needed to route things around – saving memory and run-time overheads.
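The multiplexing mentioned above rests on the Selector mechanism: one thread can wait on many selectable channels at once, which is what could replace dedicated router processes. The following self-contained sketch (our own illustration, using an in-process Pipe rather than a network socket) shows the mechanism:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class NioMultiplexSketch {
    static int readOneByte() throws IOException {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);   // selectable channels must be non-blocking
        Selector selector = Selector.open();
        pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap(new byte[]{42})); // make the source readable
        selector.select();                        // blocks until a registered channel is ready

        ByteBuffer buf = ByteBuffer.allocate(1);
        pipe.source().read(buf);
        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return buf.get(0);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readOneByte()); // prints 42
    }
}
```

With sockets registered in place of the Pipe, a single selecting thread can service many network connections, which is the resource saving suggested above.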
Attribution
The original development of JCSP was done by Paul Austin and Peter Welch. Further contributions came from Neil Fuller, John Foster and David Taylor. The development of JCSP Network Edition was done by Jim Moores, Jo Aldous, Andrew Griffin, Daniel Evans and Peter Welch. The implementation of poison (and proof thereof) was done by Bernhard Sputh and Alastair Allen. Alting barriers were designed and implemented by Peter Welch. The addition of extended rendezvous, and the merging of all these strands was done by Neil Brown, Peter Welch and Kevin Chalmers.
The authors remain in debt to the CPA/WoTUG community for continual encouragement, feedback and criticism throughout this period. We apologise unreservedly to any individuals not named above, who have nevertheless made direct technical inputs to JCSP.
Geppetto: a reusable modular open platform for exploring neuroscience data and models
Matteo Cantarelli¹,²,³, Boris Marin⁴,⁵, Adrian Quintana²,³,⁵, Matt Earnshaw³, Robert Court⁶, Padraig Gleeson³, Salvador Dura-Bernal⁷, R. Angus Silver³ and Giovanni Idili¹,²
¹OpenWorm Foundation, USA
²MetaCell Limited, UK
³Department of Neuroscience, Physiology and Pharmacology, University College London, UK
⁴Departamento de Fisica, Faculdade de Filosofia, Ciencias e Letras de Ribeirao Preto, Universidade de Sao Paulo, Brazil
⁵EyeSeeTea Limited, UK
⁶Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
⁷Department of Physiology and Pharmacology, SUNY Downstate, Brooklyn, NY, USA
Geppetto is an open-source platform that provides generic middleware infrastructure for building both online and desktop tools for visualizing neuroscience models and data and managing simulations. Geppetto underpins a number of neuroscience applications, including Open Source Brain (OSB), Virtual Fly Brain (VFB), NEURON-UI and NetPyNE-UI. OSB is used by researchers to create and visualize computational neuroscience models described in NeuroML and simulate them through the browser. VFB is the reference hub for *Drosophila melanogaster* neural anatomy and imaging data including neuropil, segmented neurons, microscopy stacks and gene expression pattern data. Geppetto is also being used to build a new user interface for NEURON, a widely used neuronal simulation environment, and for NetPyNE, a Python package for network modelling using NEURON. Geppetto defines domain agnostic abstractions used by all these applications to represent their models and data and offers a set of modules and components to integrate, visualize and control simulations in a highly accessible way. The platform comprises a backend which can connect to external data sources, model repositories and simulators together with a highly customizable frontend.
This article is part of a discussion meeting issue 'Connectome to behaviour: modelling *C. elegans* at cellular resolution'.
1. Introduction
Investigations of fundamental questions in neuroscience, such as the mechanistic basis of behaviour and cognition, generate large volumes of experimental data as well as complex computational models spanning different levels of biological detail. These push the neuroscience applications available to researchers to their limits. Visualizing and managing the heterogeneity of neuroscience data and models in a way that is accessible and usable for both experimentalists and modellers is crucial for driving the field forward. For example, it has been challenging to visualize the data and models required to link the dynamics of the nervous system of *Caenorhabditis elegans* to its behaviour [1], or to understand how the sleep regulatory circuit in *Drosophila melanogaster* is affected by the surrounding environment [2].
In neuroscience, visualization and simulation tools exist for many of the levels of detail involved [3–7], but it is often far from trivial to use them in concert [8]. One popular approach to solving this issue involves using general purpose programming languages such as Python [9–11]. This approach enables the rapid development of toolchains to solve a specific visualization and integration problem, gluing together multiple libraries and tools [12]. The problem with this approach is that these toolchains are usually developed for a specific use case, e.g. processing data from a specific source. Over time, as the application is modified to solve different problems (e.g. deal with a new model or with a new type of visualization), the specificity becomes an obstacle and the codebase becomes a series of ad hoc extensions that are difficult to maintain [13]. An even greater problem comes from the fact that these tools, and even more so their combination, are rather inaccessible to many researchers. Such technological barriers have had a remarkable effect in the neuroscience field as a whole, resulting in modellers and experimentalists working as two different communities separated by a technological divide. This has resulted in computational models that are poorly validated and has left model-generated hypotheses unexplored.
Data and models come in many different types, which are subject to change as the field evolves. Handling such heterogeneity constitutes a significant challenge for neuroscience applications, given that not all of the formats required to answer novel scientific questions will be known at design time. Standard neuroscience formats that have emerged to date include NeuroML [14,15] for computational neuroscience and Neurodata Without Borders [16] for experimental data. Dealing with an extensible set of formats in a generic yet customizable way requires decoupling the software infrastructure from these domain-specific representations. Designing such a system is not trivial, considering that experimental and computational data and models each come with their own set of challenges. The sheer size of experimental datasets, particularly those arising from connectomics and imaging, requires specific visualization capabilities and optimizations. Computational models need to be instantiated within an application to let users interact with their state variables and parameters. Different numerical solvers may be required for these models to be simulated, but the user will not necessarily want to be exposed to the complexity of the software solution and low-level libraries involved [17]. In addition, as the biological detail and scale of simulations increase, transparent access to high-performance computing infrastructures [18] will be required. Data and models are also likely to be stored in repositories and databases using disparate technologies, which poses yet another challenge for applications.
To address the challenges posed by heterogeneous data and models, as well as bridging the divide between users with different fields of expertise, we have developed Geppetto, an open source, modular middleware platform that can be used to build different neuroscience applications. In order to process diverse types of data and models in a reusable way, the software infrastructure is decoupled from domain data and model specification. This decoupling is achieved through the Geppetto Model Abstraction, designed to represent the underlying experimental and computational data and models in a standard way, via reusable modules. Geppetto is also optimized for coping with large amounts of data, through automatic compression and loading on demand, and is able to run simulations on remote supercomputers. To improve accessibility, Geppetto facilitates building novel interfaces by hiding the underlying technologies and by providing prebuilt user-friendly user interface (UI) components. By abstracting and integrating experimental data, computational models and simulators, it is hoped that Geppetto will enable the building of neuroscience applications that can bring together theorists, modellers and experimentalists to formulate and answer increasingly challenging scientific questions related to brain function.
2. Methods
Geppetto is a modular, extensible open-source platform based on a client–server architecture (figure 1) that provides a framework for building neuroscience applications for visualizing data and models and for controlling simulations. The Geppetto backend architecture defines a set of abstract services for which specific implementations can be provided for different domains. The Geppetto frontend provides visualization capabilities that encompass a wide range of what is typically needed for neuroscience data visualization, be it experimental data or data resulting from simulations. The Geppetto frontend is built on a typical modern web stack of JavaScript and React [19], making use of npm [20] to manage dependencies and webpack [21] to package the code into a browser-ready application.
The Geppetto Model Abstraction (figure 1, orange boxes) enables the decoupling of domain-specific modelling formats from the visualization components, by providing a meta-model that can be used to represent them in a declarative way. To this end, it defines a type system based on core concepts from Object-Oriented Programming: Variables, Types and Values. By supporting Type inheritance (any Geppetto Type can extend another) and composition (Geppetto’s CompositeType can contain Variables of other Types), the Geppetto Model Abstraction makes it possible to represent hierarchical structures of data and models. Geppetto uses the eclipse modelling framework (EMF) [22] to specify its models’ abstractions. The EMF schema is then used to programmatically generate an API for the Geppetto Model Abstraction for each one of the supported (user domain) languages [23,24]. Developers can build their own custom Types using this API, and use them in combination with the ones provided in the Geppetto Common Library (e.g. State Variable, Parameter, etc.). Any model created using the Geppetto Model Abstraction takes the name of a Geppetto Model. Once a domain-specific model is described in terms of the Geppetto Model Abstraction (e.g. by defining a custom Type), the entire platform becomes capable of treating its constituent elements appropriately. It is important to note that in Geppetto, Types are defined using a domain agnostic meta-model: while an application could, for example, create a Library of Types that represent computational models, another application might build one whose Types represent sets of microscopy images. Inside a Geppetto Model, developers can also specify the Data Source services used to fetch data from remote repositories, along with the Queries available to interrogate them. The Geppetto Model Abstraction also defines ImportTypes which can hold references to data and models existing on the backend that have not yet been loaded. 
Sending the client ImportTypes, which are fully loaded only upon a request triggered by the user's actions, is what enables Geppetto to load data on demand (i.e. lazy loading).
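The core ideas of the Geppetto Model Abstraction, Types supporting inheritance, and CompositeTypes containing Variables of other Types, can be sketched in a few classes. The names below are illustrative only; the real abstraction is EMF-generated and considerably richer:

```java
import java.util.ArrayList;
import java.util.List;

// A Type with single inheritance via a superType chain.
class GType {
    final String name;
    final GType superType; // null for root types
    GType(String name, GType superType) { this.name = name; this.superType = superType; }

    // Type inheritance: a Type "is a" supertype anywhere up its chain.
    boolean isA(GType other) {
        for (GType t = this; t != null; t = t.superType)
            if (t == other) return true;
        return false;
    }
}

// A named Variable of some Type.
class GVariable {
    final String name;
    final GType type;
    GVariable(String name, GType type) { this.name = name; this.type = type; }
}

// Composition: a CompositeType contains Variables of other Types.
class GCompositeType extends GType {
    final List<GVariable> variables = new ArrayList<>();
    GCompositeType(String name) { super(name, null); }
}

public class ModelAbstractionSketch {
    public static void main(String[] args) {
        GType stateVariable = new GType("StateVariable", null);
        GType membranePotential = new GType("MembranePotential", stateVariable);
        GCompositeType cell = new GCompositeType("Cell");
        cell.variables.add(new GVariable("v", membranePotential));

        System.out.println(membranePotential.isA(stateVariable)); // true
        System.out.println(cell.variables.get(0).type.name);      // MembranePotential
    }
}
```

Because inheritance and composition are expressed at this meta-model level, the same machinery can represent a computational model in one application and a library of microscopy image sets in another.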
The entry point for a Geppetto application is the Geppetto Project. Each Geppetto Project holds a reference to a single Geppetto Model and in addition stores the current state of the application (e.g. which components are open, along with their content and layout).
The Geppetto backend has a modular architecture that defines multiple service abstractions (figure 1, dashed lines) designed to perform different operations. The specific implementations of these services live in separate modules that can be optionally used by the different applications. For instance, Virtual Fly Brain (VFB) uses the OBJ and SWC [25] Model Interpreters, while OSB uses the one for NeuroML (figures 2–4). New modules that implement these service abstractions can be contributed to expand Geppetto's capabilities. The Geppetto backend is responsible for loading Geppetto Projects into memory and for delegating user actions that require server-side operations to the appropriate services, as specified in the Geppetto Model. In this regard, the main role of the Geppetto backend is to orchestrate the interactions of all services available in a particular application. A Geppetto backend implementation exists for both Java (the reference, fully featured one) and Python. Different application servers can be used to host the backend, including Virgo [27] for Java and Django [28] or Jupyter [29] for Python. The needs of the specific application will determine the most suitable backend to use, with the Java one currently targeting robust client–server applications aimed at multi-user deployment (e.g. OSB, VFB) and the Python one also useful for lightweight local deployments aimed at a single user (e.g. NEURON-UI, NetPyNE-UI).
A central abstract service defined in the Geppetto backend is the Model Interpreter. Specific Model Interpreter implementations are used to let Geppetto essentially ‘understand’ a given format representing concepts in the user’s original domain—i.e. they allow building instances of the Geppetto Model Abstraction from descriptions in the users’ domain language. Model Interpreters for popular neuroscience formats such as LEMS [15], NeuroML [14] and NetPyNE [6] are already available.
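The service abstraction and the backend's orchestration role can be sketched as an interface plus a registry that dispatches to the first interpreter able to handle a given format. The interface, class names and the flattened map "model" below are illustrative inventions, not the real Geppetto API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service abstraction: each implementation understands one format.
interface ModelInterpreterSketch {
    boolean canHandle(String extension);
    Map<String, String> interpret(String source);
}

// A toy interpreter for the SWC morphology format.
class SwcInterpreterSketch implements ModelInterpreterSketch {
    public boolean canHandle(String extension) { return extension.equals("swc"); }
    public Map<String, String> interpret(String source) {
        Map<String, String> model = new HashMap<>();
        model.put("format", "SWC");
        model.put("content", source);
        return model;
    }
}

public class InterpreterRegistry {
    private final ModelInterpreterSketch[] interpreters;
    InterpreterRegistry(ModelInterpreterSketch... interpreters) { this.interpreters = interpreters; }

    // The backend's orchestration in miniature: pick the interpreter for the format.
    Map<String, String> load(String extension, String source) {
        for (ModelInterpreterSketch i : interpreters)
            if (i.canHandle(extension)) return i.interpret(source);
        throw new IllegalArgumentException("no interpreter for ." + extension);
    }

    public static void main(String[] args) {
        InterpreterRegistry registry = new InterpreterRegistry(new SwcInterpreterSketch());
        System.out.println(registry.load("swc", "1 1 0 0 0 1 -1").get("format")); // SWC
    }
}
```

Contributing a new module then amounts to supplying another implementation of the service interface, without touching the orchestration code.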
The abstract Simulator service is designed to wrap and control simulators external to Geppetto. The Geppetto backend orchestrates the interactions between Model Interpreter and Simulator services, so that models can be loaded, converted and simulated as result of user operations. Implementations of the Simulator service can wrap simulators as external processes or as remote ones running on external servers (e.g. the Neuroscience Gateway supercomputing facilities [50]). A number of computational neuroscience simulators such as NEURON [3] and NetPyNE [6] have already been wrapped and are available for reuse. Following this architecture, new simulators can be integrated into Geppetto with relative ease.
**Figure 1.** Geppetto architecture. Graphical representation of the components of Geppetto illustrating how the Geppetto Model Abstraction (orange blocks) allows backend model and data sources to be accessed by users through browser-based frontend components. Black blocks in the figure are Geppetto Extensions, used by applications built on top of the Geppetto platform. The Geppetto frontend (shades of blue) is shown containing a diverse set of visualization components. Communication between the frontend and backend happens via Websockets and a REST-API layer (grey block). The Geppetto backend (light purple block) orchestrates the various services available in a given Geppetto application, including specific Model Interpreters (dark purple blocks), external Simulators (cream blocks), Data Managers (green) and Data Sources (pink).
For scenarios where user authentication is required and user data need to be persisted, the Data Manager service can be used by developers to configure the backend to enable authentication and database persistence of the Geppetto Projects and simulation results.
The Geppetto frontend is responsible for presenting the models and data to the user and for allowing them to interact with the application and its workflows. The Geppetto frontend offers a set of controls and components (figure 1) to build the UI of Geppetto-based applications. While controls (e.g. buttons, drop-downs, dialogs, etc.) are generic and data agnostic building blocks, components are more complex constructs that can be used to display data (e.g. three dimensions (3D), time series, connectivity, MRI, big images, stack, etc.) or to enable specific workflows (e.g. Control Panel, Search, Query, etc.). Components are built using various lower-level JavaScript open-source libraries (e.g. [34–37]) and are designed to integrate with the Geppetto Model using a specific API. Any component can be optionally created inside a draggable dialogue window to facilitate data presentation. Components inside these windows are referred to in Geppetto as Widgets.
Geppetto Extensions let developers decide what controls and components they need for their specific application, control the layout and look and feel and also create additional domain-specific custom components (Extensions are represented by the black boxes in figure 1). Geppetto only loads the UI components specified in the Geppetto Extension of a given application. A default Extension is provided as an example and is accessible via https://live.geppetto.org. By loading the components asynchronously only once the interface needs them, Geppetto optimizes the loading times of the application at start-up.
Upon receiving a Geppetto Model from the backend, when loading a given Geppetto Project, the frontend will instantiate it. Instantiated Geppetto Types are mapped to JavaScript objects (e.g. a population of one cell Type would become a JavaScript array containing Instances of that Type) and augmented with specific Capabilities which confer on them the ability to be accessed via a specific API. So, for instance, if a Model Interpreter in the backend defined a custom Type including a State Variable, upon instantiation in the frontend this would become a JavaScript object with an injected StateVariableCapability containing methods specific to state variables, e.g. getUnit(), getInitialValue(), etc. This has the advantage of giving developers the ability to build UI components that can interact with the Geppetto Model in an object-oriented way, and allows all the user operations to be fully scriptable, reproducible and testable (e.g. a UI button designed to plot a state variable would call Plot.plotData(myStateVariable.getTimeSeries())). The same principles apply when a custom Type defining a cell morphology (Values like Sphere and Cylinder are available to this end in the Geppetto Model Abstraction) is sent to the frontend and passed to the 3D Canvas component using its API for display. Geppetto has the ability to either visualize a single instance of a Type (a cell morphology in this example) or an entire population based on it, depending on whether the Model Interpreter responsible for the creation of the model instantiated the Type only once or multiple times through an ArrayType. In some cases, as with the Stack Viewer which connects directly to an IIP3D Server [38], it might be preferable for the UI components to read a specific format directly without requiring a mapping to the Geppetto Model, which is also permitted by the architecture.
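The capability-injection idea (implemented in the frontend in JavaScript) can be sketched statically: an instantiated state variable exposes a capability API on top of the plain instance. All names here are invented for illustration; they are not the actual frontend classes:

```java
// Hypothetical capability API, analogous to StateVariableCapability.
interface StateVariableCapabilitySketch {
    String getUnit();
    double getInitialValue();
}

// A plain instance in the instantiated model, addressed by its path.
class InstanceSketch {
    final String path;
    InstanceSketch(String path) { this.path = path; }
}

// "Injection" modelled as implementing the capability at instantiation time:
// UI components interact with the instance only through the capability API.
class StateVariableInstanceSketch extends InstanceSketch implements StateVariableCapabilitySketch {
    private final String unit;
    private final double initialValue;
    StateVariableInstanceSketch(String path, String unit, double initialValue) {
        super(path);
        this.unit = unit;
        this.initialValue = initialValue;
    }
    public String getUnit() { return unit; }
    public double getInitialValue() { return initialValue; }
}

public class CapabilitySketchDemo {
    public static void main(String[] args) {
        StateVariableInstanceSketch v =
            new StateVariableInstanceSketch("cell.v", "mV", -65.0);
        System.out.println(v.getUnit() + " " + v.getInitialValue()); // mV -65.0
    }
}
```

In the real frontend the capability methods are injected dynamically onto JavaScript objects, which is what makes every user operation scriptable against a uniform API.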
3. Results
In this section, we present four examples of neuroscience applications that have been built using Geppetto. Thanks to Geppetto's open-source model, many of the features and components described in the Methods section have evolved in concert with the development of these applications in order to satisfy their requirements. Each of the applications has its own Extension, where its custom functionality is specified, and a specific deployment configuration. While the first two, OSB and VFB, use the Java backend and are deployed on public web servers where multiple users can access them simultaneously, the last two, NEURON-UI and NetPyNE-UI, use a Python backend and are designed to be local deployments aimed at a single user, similar to traditional client applications. Geppetto is currently being used to build a total of seven neuroscience applications [31,39–44].
Figure 3. (a) Screenshot of a reduced thalamocortical network model [26] on OSB showing analysis and simulation widgets provided by Geppetto and the Geppetto frontend OSB extension. Centre of screen shows 3D rendering of the 12 populations of pyramidal cells and interneurons. Widgets shown are (clockwise from top-left): plot showing recorded membrane potentials from three cells of a previously run experiment; run dialogue for selecting simulators and running experiments; widget showing ion channels and their densities for a single-cell model; chord diagram showing connectivity between populations. (b) Visualization of the neuronal network model of C. elegans being developed by the OpenWorm project. Centre of screen shows 302 neurons (red: interneurons; pink: sensory; purple: motor neurons) and four quadrants of body wall muscles (green) located away from the body for clarity. Connectivity widget on lower right shows chemical synapses between individual neurons/muscles. Inset on lower left illustrates interactive exploration of network; selecting a single motor neuron (RMED in head) highlights the neurons connected to it, along with five muscles in two of the ventral quadrants.
4. Open Source Brain
OSB (http://www.opensourcebrain.org) is a platform for visualizing, simulating, disseminating and collaboratively developing standardized, biophysically detailed models of neurons and circuits [45]. OSB contains a range of published neuronal and circuit models from multiple brain regions including the neocortex, cerebellum and hippocampus as well as invertebrate neuron models. Model components (e.g. point neuron or morphologically detailed cell models including membrane conductances, synapses, 3D network structures) are contained in user-created projects, each linked to a public code sharing repository (normally hosted on GitHub) that holds the model source code, specified in NeuroML, a widely used model description format for computational neuroscience [14,15]. OSB provides an integrated browser-based workspace that captures many of the infrastructural demands of projects in computational neuroscience, and allows users to interact with the underlying neuronal models through a graphical interface, without requiring programming knowledge or installing and configuring simulators.
Figure 2 shows how Geppetto is configured for OSB. Many aspects of Geppetto's functionality have been developed to provide the core functionality for OSB. The NeuroML Model Interpreter and the LEMS Conversion services were contributed to Geppetto to deal with the NeuroML and LEMS formats, reusing previously developed libraries [15]. The NeuroML Model Interpreter allows standardized model descriptions to be loaded into the OSB Geppetto deployment, providing automatic 3D visualization of morphologies and internal structure of models, such as state variables and parameters (figure 5a) and connectivity within the network (figure 5b). Structured metadata in the NeuroML files can be extracted, as well as the underlying mathematical expressions of dynamical components in the model (e.g. kinetics of membrane conductances). These data are made available in an accessible format to the user through a custom Extension to the Geppetto frontend.
This OSB custom extension to Geppetto adds shortcuts and menu options for interacting with models, running simulations and visualizing their results. A summary of information extracted from the NeuroML model can be accessed through a ‘Model Description’ widget, which includes links to the source file and original data sources, giving model provenance. This widget also provides easy access to neuronal model-specific functionality, such as plotting rates of activation and inactivation for ion channels and overlaying locations and densities of active conductances on neuronal morphologies (bottom right, figure 5a). A shortcut to the Connectivity Widget allows the user to see synaptic connectivity of models at a glance: as a chord diagram (bottom left, figure 5a), connectivity matrix with weights (bottom right, figure 5b), force-directed graph or hive plot. Key parameters present on any given model are thus automatically exposed in a format familiar to neuroscientists.
The simulator-agnostic NeuroML format can be converted to simulator-specific formats such as NEURON [3] using a suite of existing converters that implement the Geppetto conversion service interface (figure 2). Geppetto's external simulator abstraction allows OSB to interface transparently with these converters and their associated simulators, allowing models to be simulated through a simple interface. Geppetto can either dispatch simulator jobs to the Neuroscience Gateway [30], a high-performance computing facility, or run them on OSB servers. The extension provides assistance for simulation workflows; basic protocols can be defined that create batched experiments over a given range of parameters, or the user can record all membrane potentials with a single click. Upon completion, the data generated are sent to the browser for visualization using Geppetto's plotting widget (top left, figure 5a), or recorded membrane potentials or calcium concentrations can be visualized by pseudocolouring the morphologies to show changes over the course of a simulation, and the simulation can be replayed at various speeds. Alternatively, the raw results can be downloaded or automatically uploaded to Dropbox via Geppetto's Dropbox interface functionality. Experiments run asynchronously on remote servers, so users do not need to keep their browser open.
The configurable functionality of Geppetto middleware enables OSB to make models accessible, opening them up to critical scientific scrutiny by a wide range of neuroscientists. This supports the process of ongoing model evolution, which is aided by OSB’s deep link to GitHub [46], preventing model development from becoming arrested at the point of publication. OSB therefore provides a resource of robust models that can function as best practice examples for model sharing for the neuroscience community.
In addition to this research aspect, OSB also leverages Geppetto’s tutorial component to provide interactive computational neuroscience tutorials aimed at students. These tutorials allow users to run virtual experiments and protocols through an easy-to-use web interface, allowing basic concepts in neurophysiology and computational neuroscience to be taught without installing simulators or writing code.
5. Virtual Fly Brain
VFB (http://virtualflybrain.org) is a hub for *D. melanogaster* neuroscience research which was born from the need to make the newly standardized fly neuroanatomy available to the public [47–49]. Along with extensive curation of the literature in collaboration with FlyBase [50], VFB v1 allowed users to explore labelled confocal immunofluorescent slices of the adult fly brain across the Internet. The user could step through the brain and identify anatomy by hovering over it. Later this expanded to include expression, transgene and single-neuron image data published by multiple laboratories and aligned to the same template brain, enabling any of the 40 000 images to be overlaid. While most researchers were used to viewing slices through the brain, as more single neurons, which appear as tiny points in cross-section, were added, interpreting their morphology became increasingly difficult without a 3D representation.
VFB v2 was designed to provide access to all the complex queries and data an expert might require within an interface a novice can easily navigate. Geppetto’s ability to load data on demand and to optimize the visualization of neurons as tubes or traced lines was essential for VFB to efficiently display larger amounts of imaging data on the screen. The ability to query third-party RESTful APIs through the Data Source services allowed VFB to fetch remote data by running complex queries (figure 5b) involving multiple configurable Data Sources (figure 1). VFB currently pulls data via an ontology reasoner (OWL-ELK [33]) as well as a graph database (Neo4j [32]). Geppetto’s Control Panel and Search components were reused and customized within VFB’s Geppetto Extension to show custom fields and to provide autocompletion search results utilizing a SOLR [51] indexing server.
6. NEURON-UI and NetPyNE-UI
NEURON is a widely used simulator in the neural multi-scale modelling domain, allowing models to be built that link reaction–diffusion dynamics at the molecular level, to neuronal electrophysiology, up to the large-scale network level [3,6,52,53]. It has thousands of users, a model database [54] with over 600 models, and over 1900 NEURON-based publications. NEURON is being used by major brain research initiatives such as the Human Brain Project and the Allen Institute.
NEURON includes a native graphical UI for model construction and control which, while fully functional, has limited usability and graphical capabilities and is based on deprecated libraries (Interviews) originally developed in the 1980s.
NetPyNE [56] is a high-level Python interface to NEURON that facilitates the development, simulation and analysis of biologically detailed neuronal networks. It provides a unique high-level declarative language designed to facilitate the definition of data-driven multi-scale models (e.g. a concise set of connectivity rules versus millions of explicit cell-to-cell connections). The user can then easily generate NEURON network instances from these specifications, run efficient simulations (including on high-performance parallel computing resources) and exploit the wide array of built-in analysis functions. Its standardized format—compatible with NeuroML—makes it easier to understand, reproduce and reuse models. NetPyNE is being used to develop models of different brain regions—e.g. thalamus, cortex and hippocampus—and phenomena—e.g. neural coding and brain disorders [6,57].
Geppetto has been used to build UIs for both NEURON and NetPyNE. The two applications, designed to be installed and used locally by a single user, have in common an architecture based on the Geppetto interactive Python backend. This backend is implemented as a Jupyter Notebook [29] extension which provides direct communication with the Python kernel. By defining a set of component extensions, Geppetto’s interactive Python backend makes it possible to synchronize the data model underlying the UI with a custom Python model. This functionality is at the heart of both NEURON-UI and NetPyNE-UI and means any change made to the Python kernel is immediately reflected in the UI and vice versa.
Although NEURON-UI and NetPyNE-UI share the same architecture (figure 4 gives an overview of the Geppetto components used in NetPyNE-UI), they differ in certain aspects. In NEURON-UI, the graphical interface is created using a custom Python API meant to mimic NEURON’s Interviews-based API. The panels, buttons and text boxes in the UI are therefore created from Python and mapped to Geppetto UI components (figure 7a). These components are then connected to the internal Geppetto API to visualize the cells and the networks, run the simulations and plot the results. The idea behind this approach was to retain backward compatibility with the numerous existing NEURON interfaces built with Interviews for various models. Our future aim is to fully map the NEURON API to our NEURON-UI, therefore providing a comprehensive alternative to the traditional UI.
By contrast, in NetPyNE-UI, the UI is defined entirely in JavaScript inside its Geppetto extension. This offers a flexible and intuitive way to create advanced layouts while still enabling each of the elements of the interface to be synchronized with the Python model. The UI splits the workflows in three tabs: network definition, network exploration and network simulation and analysis (figure 7b). From the first tab, it is possible to define—or import via Python—the high-level network parameters and rules that will be used for its generation. In the second and third tabs, Geppetto’s 3D Canvas is used to visualize the instantiated network. The third tab lets the user simulate the instantiated model (this tab is selected in figure 7b). Geppetto also allows NetPyNE-UI to display in the browser a number of matplotlib plots defined in NetPyNE for network analysis and simulation. Both NEURON-UI and NetPyNE-UI can be installed via pip [58] or used inside the provided Docker images.
The new Geppetto-based UIs will make NEURON and NetPyNE accessible to a wider range of researchers and students, including those with limited programming experience. This will enable experimentalists to better collaborate with modellers, or to directly reproduce and explore their own experiments via computational simulations.
7. Discussion
We have developed Geppetto, an open-source middleware platform for building accessible neuroscience applications. Geppetto facilitates the development of complex applications by providing a well-tested, reusable set of building blocks to integrate diverse neuroscience data, models and simulators. Geppetto provides a modular frontend, where multiple customizable UI components and Widgets make it possible to visualize and analyze models and data, as well as a backend capable of connecting to multiple data sources and lower-level, domain-specific descriptions and simulators. This was made possible by designing the Geppetto Model Abstraction that can be used to represent a variety of neuroscience domain models, linked to a modular web-based architecture engineered using various open-source libraries. Geppetto has been used as the basis of a number of online and desktop applications in neuroscience: OSB, VFB, NEURON-UI and NetPyNE-UI described here, as well as Patient H.M. [39], WormSim [41] and SciDash [44].
Neuroscience applications are typically developed independently, to address a specific requirement. This leads to considerable redundancy with the same functionality being redesigned and implemented over and over again [59–65]. This approach is only justifiable when the shared set of features is negligible. In this paper, we have shown that even for applications whose requirements were specified independently and had minimal overlap, there can be a significant degree of shared infrastructure. Geppetto proposes an alternative approach by exploiting this fact, allowing neuroscience applications to be built from reusable modules—as illustrated by the overlapping blocks in figures 2–4. This strategy fits naturally into the open-source model—components and modules are more likely to be reusable compared to monoliths—making Geppetto a flexible and extensible solution for multiple applications in neuroscience.
As middleware that factors out commonalities between different domains, Geppetto’s modular structure enables a high level of reuse, allowing developers to write only the code specific to their neuroscience application, resulting in considerable time savings. As with all software platforms, Geppetto has its own learning curve: developers must understand its architecture and become familiar with its components. While this initial investment might at first seem a complication compared to the apparent ease of starting from a blank slate, developers associated with the applications described above, with no previous experience of Geppetto, have found it takes only one to four weeks to become productive. This time investment is outweighed by the subsequent savings made in avoiding common pitfalls, replicating solutions to
common problems and rewriting entire software components and workflows. There is also a significant advantage in interacting with the active community of Geppetto developers, who can assist with any queries. The net time saving compared to an approach that starts from scratch is difficult to estimate but is likely to range from six months to five years² depending on the targeted scope—the more the required features overlap with Geppetto’s, the bigger the savings—and on the size and experience of the team of developers involved. Moreover, extensive sharing of modules between applications results in them being thoroughly tested [66], while having a shared infrastructure that undergoes regular release cycles ensures maintenance is less burdensome for each specific application. Furthermore, the distributed nature of the Geppetto code base and the fact that updates are made independently of any specific project ultimately increases the longevity of any application built with this platform.
Figure 7. (a) Screenshot of NEURON-UI while in edit mode: a simplified cell builder (bottom left) lets the user edit any selected section (in yellow) while the Run control panel (right) is used to control the simulation. (b) NetPyNE-UI showing the result of a simulation of a large-scale M1 microcircuit model with widgets showing a raster plot (top left), individual cell membrane potentials (bottom left), population spiking statistics (middle) and the power spectral densities for two populations (right).

The diversity of applications that have been built so far with Geppetto illustrates the flexibility of its model abstraction capabilities, which can encompass different domains, data and scientific modelling formalisms. Also, as the platform keeps evolving, new solutions added for a specific application become immediately available to all the other applications. Examples of this include many of the features contributed by OSB being reused by multiple applications (e.g. Control Panel, Search Bar or the Experiments table); the SWC [25] Model Interpreter contributed by VFB, which is reused in OSB; and the 3D Canvas, originally built for the first deployment of the platform and reused by every other application to date. Geppetto combines a model-driven design with a service-oriented architecture to enable reuse across multiple applications. Its modularity, a centrepiece of both the backend and the frontend, is obtained by engineering together a unique set of technologies [19,21,22,67,68] to provide novel functionality. By allowing different neuroscience applications to use the same technologies, Geppetto provides well-tested solutions that bring closer together otherwise disjoint research groups—both computational and experimental, thereby fostering collaboration.
The Geppetto applications described in the Results section are in active development. Some of the planned and ongoing projects include: extending OSB to bring together models and the experimental data used to build and test them, by adding standardized data interpreters (e.g. v. 2 of the Neurodata Without Borders format); extending VFB to cover all stages/regions of the fly CNS, incorporating synapse level connectomics data with the extensive light level image and literature knowledge; releasing a new version of WormSim, currently being developed within the Open-Worm project [1] that will integrate the Sibernetic [69] fluid dynamics simulator (see [70]) with the NeuroML-based nervous system model (see [71]). The latter will be the first instance of a Geppetto application providing a non-computational neuroscience-specific numerical engine, used for fluid dynamics simulations (figure 8).
Thanks to its open, modular, web-based architecture, Geppetto ultimately enables the engineering of a new breed of neuroscience applications that can be used in a collaborative way by theoreticians, modellers and experimentalists to formulate new scientific hypotheses, build and validate new models, and help gain insights into the most pressing questions in neuroscience.
Data accessibility. Geppetto is open source (http://git.geppetto.org) and released under the MIT licence. Documentation is available at http://docs.geppetto.org. A live demo application to showcase the latest release of Geppetto (0.4.0 at the time of writing, new versions are released monthly) is available at http://live.geppetto.org. Docker images are available for Geppetto at http://docker.geppetto.org, which simplify creation of a local instance of the application with all required libraries preconfigured. Integration tests for the full stack and for the UI are available for all the main features. These tests are automatically executed after every commit.
Competing interests. MetaCell Ltd. was contracted by UCL to develop some of the features of Open Source Brain.
MetaCell Ltd. was contracted by EMBL-EBI to develop some of the features of Virtual Fly Brain. MetaCell LLC was contracted by State University of New York to develop NEURON-UI and NetPyNE-UI. M.C., G.I. declare financial interest in MetaCell Ltd, LLC. The other authors declare no competing financial interests.
Funding. The OSB initiative was funded by the Wellcome Trust (086699, 101445, 095667), R.A.S. is in receipt of a WT PRF (203048) and an ERC advanced grant (294667). In addition, the infrastructure to enable integration of OSB and the Neuroscience Gateway was funded by the BBSRC-NSF/BIO program (BB/N005236/1 and NSF #1458495). The VFB project is supported by a grant from the Wellcome Trust: Virtual Fly Brain (208379/Z/17/Z) (October 2017–September 2021) after Wellcome Trust: Virtual Fly Brain: a global informatics hub for Drosophila neurobiology (WT105025MA) (October 2014–September 2017). VFB was previously supported by a research award from the BBSRC to Douglas Armstrong and Michael Ashburner (Cambridge BB/G02233X/1; Edinburgh BB/G02247X/1). A UK e-Science Theme award to Douglas Armstrong helped establish the VFB project. B.M. is funded by grant 2017/04748-0, São Paulo Research Foundation (FAPESP). NEURON-UI and NetPyNE-UI were funded by NIH U01EB017695, NIH R01MH086638, NIH R01EB022903 and DOH01-C32250GG-345001.
Acknowledgements. We would like to thank all Geppetto contributors (http://contributors.geppetto.org/) and in particular Jesus Martinez who implemented many Geppetto features. We are grateful to the OpenWorm Foundation for hosting the Geppetto repositories and community and in particular to Stephen Larson for stimulating discussions throughout the development of Geppetto, for his valuable input and continued support. We thank William Lytton, Robert McDougal and Michael Hines for their support while developing NEURON-UI. We would also like to thank Carlo Collodi for inspiring the name of the platform with his novel Pinocchio.
Endnotes
1Depending on the background and level of experience of the developer.
2Estimate based on the actual time that was spent designing and implementing various reusable components, e.g. 3D Canvas 6 months, MRI Viewer 3 months, Plotting widget 6 months, Connectivity Widget 5 months, Stack Viewer 6 months, Control Panel 3 months, Geppetto Model Abstraction 9 months, etc. Building of the infrastructure in its current form took 3 years. All these figures consider 1 Senior Development Engineer FTE.
References
33. Neo4j Developers. 2012 Neo4j Graph NoSQL Database (online).
35. three.js—javascript 3D library. See https://threejs.org/ (accessed on 10 March 2018).
In previous chapters, you’ve learned about several techniques that malware uses to establish context and better understand its current environment. When malware determines that it’s running in an analyst’s lab or in an otherwise hostile environment, it may take evasive measures, such as delaying its execution, creating decoys, or even actively impeding investigation efforts by interfering with the analyst’s tools. This chapter will focus on these and other methods that malware uses to hide from and circumvent analysis tools.
Self-Termination
A simple and effective way in which malware can avoid analysis is self-termination. The malware can simply call Windows API functions such as TerminateProcess or ExitProcess to issue a “kill” command to its own process, like so:
```c
is_vm = enumerate_reg_keys(keys)   // check VM-related registry keys (Chapter 4)
if (is_vm)
{
    current_process = GetCurrentProcess()
    TerminateProcess(current_process, ...)
}
```
This malware pseudocode first calls its own internal enumerate_reg_keys function to enumerate some of the VM-related registry keys discussed in Chapter 4. (The details of the function aren’t shown here.) Next, if is_vm returns true, the malware requests a handle to its own process (GetCurrentProcess) and then terminates itself by calling TerminateProcess. The ExitProcess function can be used in the same way, with a few trivial differences. Sometimes, malware even calls both functions to ensure that it has successfully terminated.
This technique is especially effective against automated sandboxes, which can’t monitor the behavior of a malware sample that has terminated itself. However, a sandbox could flag the function itself or detect that the sample terminated itself too soon. This approach can also be effective against a malware analyst interacting with the sample manually, as the analyst will have to walk backward through the code in a debugger or disassembler to determine how and why the malware terminated itself.
When you’re analyzing a malware sample that’s using this technique, setting a debugger breakpoint on ExitProcess and TerminateProcess may help you catch the malware before it has a chance to kill itself. This will allow you to inspect the call stack and the code leading up to the process termination, and hopefully to identify what caused it. Keep in mind, however, that these API functions might also be called during a crash, so the malware may not be invoking them directly for evasion purposes.
Delayed Execution
Imagine a typical automated malware analysis sandbox environment. This environment will boot up on demand, detonate a malware sample, monitor the malware’s behaviors for a few minutes (depending on how the sandbox is configured), and then shut down. But what if the malware delays its own execution to “time out” the sandbox analysis process? For example, perhaps the malware executes a sleep routine in which it lies dormant for several minutes, outlasting the short life of the sandbox environment. It’s not unheard of for advanced malware to delay its execution for hours or even...
weeks at a time. This is an effective method of evading sandboxes and frustrating malware analysts’ efforts.
**Sleep Function Calls**
Perhaps the most common form of delayed execution is malware simply invoking the `Sleep` function from the Windows API. `Sleep`, as well as its cousin, `SleepEx`, takes a parameter that represents the sleep time in milliseconds. The following assembly code shows a snippet of a malware sample calling the `Sleep` function:
```assembly
push 493E0h ; 5 minutes
call Sleep
```
In this case, the `493E0h` parameter passed to `Sleep` is the time in hexadecimal, representing 300,000 milliseconds, or 5 minutes.
**NOTE**
*For more information on the `Sleep` function and how malware can use it, see Chapter 7.*
To bypass this technique, you could put a breakpoint on `Sleep` and `SleepEx` function calls and then modify the `dwMilliseconds` parameter passed to it. Alternatively, you could `nop` out these `Sleep` instructions or jump over them in a debugger. These aren’t always foolproof solutions, however; advanced malware may calculate the system time before and after the calls to `Sleep` to verify that the `Sleep` function executed correctly! Lastly, many modern sandboxes can intercept calls to `Sleep` and modify them, dramatically lowering the sample’s total sleep time.
**Timeouts**
Malware can take a less traditional route to delay its execution by using Windows utilities, such as `ping.exe`, to cause a *timeout*. This approach often works better than the sleep method, since it’s more difficult for sandboxes to interfere with. Another advantage is that it may confuse the analysis process, as the malware analyst must figure out why the malware sample is invoking a certain application.
In the following code snippet, a malware sample is executing `ping.exe` to ping Google 1,000 times. Depending on the network connection speed, this could create a long delay or even cause the sandbox to time out and stop analysis:
```assembly
push eax ; "ping.exe google.com -n 1000"
push 0
call CreateProcessA
```
Malware can also call the `timeout.exe` Windows tool, which is typically used in batch scripts to pause command processing, in order to delay execution. Be on the lookout for malware invoking these types of tools. Use code...
analysis and debugging to understand why the malware might be executing this behavior.
**Time and Logic Bombs**
In a *time bomb*, the malware sets a specific time, such as a certain date or time of day, when it will execute. For example, a malware sample may contain embedded code that executes only at 9:00 AM every morning, once every Saturday, or on December 26, 2024, at 5:55 PM. Unless the sandbox or malware analyst manually resets the date or time to trick the malware into running, the sample won’t execute its malicious code.
Similar to a time bomb, in a *logic bomb*, the malware executes after a specific event (such as a certain file deletion or database transaction) has occurred on the host. Logic bombs may be even more effective than time bombs, since they can be very specific to the malware’s operating environment.
The following simplified pseudocode demonstrates a time bomb technique in which the malware sample gets the current system date and compares it to a hardcoded date (in this case, 2024):
```c
--snip--
GetSystemTime(&systemTime)
if (systemTime.wYear <= 2024) {
    KillSelf()
}
```
If the malware determines that the current date is 2024 or earlier, it will fail to execute.
Sometimes a sandbox can identify whether malware is using these techniques, but they often fly under the radar. The best way to identify time and logic bombs is code analysis. Inspecting the malware sample in a disassembler or debugger may uncover the time, date, or logic that the malware is looking for. Once you identify this, you can simply set your analysis system time to match it or try to re-create the logic. Alternatively, you could modify the malware’s code in a disassembler or debugger to bypass these checks.
It’s important to note that, besides being used for evasion, time bomb techniques are used to control the malware’s spread. Malware may be programmed to *not* execute after a specific date or time. Advanced malware targeting a specific organization could implement such logic to avoid spreading outside its intended target network.
**Dummy Code and Infinite Loops**
Some malware authors introduce *dummy code* into their malware that loops, possibly infinitely, calling CPU-intensive functions or functions that serve no purpose other than to time out the analysis. The dummy code usually runs once the malware has detected a sandbox or VM environment. The following assembly code example shows what that might look like:
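One minimal form such a loop can take (an illustrative sketch rather than the original listing):

```assembly
loop_start:
inc ecx              ; increment ecx by 1
cmp ecx, ecx         ; compare ecx to itself -- always equal
je  loop_start       ; so the jump is always taken
```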
In this basic for loop, the value of ecx is incremented by 1 and then compared to itself. If it’s equal to itself (hint: it will be), the loop repeats. This simple code will stall the malware’s execution indefinitely, or at least until the sandbox terminates or the malware analyst becomes frustrated and kills the process.
Similarly, some malware repeatedly calls Windows API functions to stall analysis. For example, it might call `RegEnumKey` to enumerate the host’s entire registry, which will take a significant amount of time. Alternatively, the malware sample might repeatedly call `LoadLibrary` on nonexistent libraries. While writing this book, I analyzed a Dridex banking trojan sample that executes `GetProcAddress` over *five million times* to resolve addresses of functions it never uses (see Figure 8-1). This stalls analysis, uses up valuable sandbox memory and CPU resources, and sometimes results in a crash.
Dridex has also been known to execute `OutputDebugString` in an infinite loop, which has the same effect as the `GetProcAddress` approach. The `OutputDebugString` function will be discussed in more detail in Chapter 10. You can download the Dridex malware from VirusTotal or MalShare by using the following file hash:
**SHA256:** 34fa8c8e97d69ecd42569b994e1933b451976958e0fb8174d6ca6483c2ae070
Oftentimes, malware attempts to establish persistence on the victim host in order to “survive” system shutdowns and reboots, but persistence is also a great delayed-execution tactic. Malware can configure a persistence mechanism in the form of a scheduled task on the victim host to execute its payload only after a certain event or amount of time. Chapter 15 will discuss several ways to achieve persistence.
**Forcing Reboots and Logouts**
Forcing a system shutdown, reboot, or logout can be an effective method of evasion, especially in sandboxes. It will promptly halt all analysis efforts, at least until the host is back up. Most modern sandboxes are able to deal with this, however, and if the sandbox senses a shutdown or logout has been issued, it will simply continue analysis after the machine is back up. But this can still negatively affect the malware analysis process. In the case of reboots, for example, artifacts that were once in memory may now be destroyed.
Malware can force a reboot or shutdown by invoking functions such as InitiateShutdown, InitiateSystemShutdown, and InitiateSystemShutdownEx. All three functions operate similarly and take a few key arguments, such as an option specifying whether to shut down or reboot the host, as well as a timeout value representing the duration between the function call and the reboot or shutdown. Another API function that malware might use is ExitWindows (or its sibling, ExitWindowsEx), which adds the option to log out the user, rather than simply rebooting or shutting down the host. Finally, the system can also be shut down using WMI, PowerShell, or the built-in Windows shutdown.exe tool.
Malware often uses this technique after it has established persistence, at which point it forces a reboot and then runs its actual payload. In this way, it successfully evades certain automated analysis sandboxes and confuses (or at least annoys) malware analysts trying to investigate the sample.
**Decoys and Noise**
Some malware authors take advantage of the fact that sandboxes operate in a predictable way. For example, sandboxes must capture a large amount of data to understand and assess a malware sample’s behaviors, and malware can exploit this by generating lots of noisy or decoy data that can quickly overwhelm a sandbox or hamper analysis. This section covers a few different ways in which malware can do this.
**API Hammering**
When a sandbox detonates a malware sample, it logs the malware’s behaviors and function calls. API hammering involves calling the same function many times (in some cases, hundreds of thousands of times) to quickly fill up the sandbox logs and flood the analysis environment with useless data. As a result, the sandbox may be unable to successfully analyze the sample due to too much noise and a full log. Furthermore, malware samples using API-hammering techniques take a lot longer to fully execute in a sandbox, since the sandbox’s logging introduces extra overhead; the same sample executed on a normal end-user system would run much more quickly.
Nearly any Windows API function can be abused for this purpose. Two I’ve seen are `printf` (a C runtime function that writes output to the console) and `GetTlsValue`. The malware sample shown in Figure 8-2 called the `GetTlsValue` function over 30,000 times in a row!

The malware families Nymaim and Trickbot both employ API-hammering techniques, as described in blog posts from Joe Security (https://www.joesecurity.org). At least one Nymaim variant makes over half a million benign Windows API function calls if the sample detects that it’s running in a VM or sandbox environment! As you can imagine, this generates an enormous amount of data in a sandbox log. Some sandboxes, unable to handle that volume of data, would likely terminate the analysis early.
Many modern sandboxes can detect API hammering, however, and will flag such behavior as suspicious or even stop logging the questionable function altogether. A sandbox might also modify the running malware sample’s behavior or take other actions to prevent API hammering from interfering with analysis. But if left undetected, API hammering can severely impact the sandbox’s ability to assess the malware.
**Unnecessary Process Spawning**
Like API hammering, unnecessary process spawning is a technique used to overwhelm sandboxes and malware analysts. The malware sample shown in Figure 8-3 spawns several hundred processes, all named `xxxx.tmp`, to hide its true activity.
<table>
<thead>
<tr>
<th>Time of Day</th>
<th>Process Name</th>
<th>PID</th>
<th>Operation</th>
</tr>
</thead>
<tbody>
<tr>
<td>2:45:01.3025251 PM</td>
<td>wWinWord.exe</td>
<td>2100</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:01.4770031 PM</td>
<td>w4CC3.tmp</td>
<td>2068</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:01.6280210 PM</td>
<td>w4D50.tmp</td>
<td>1904</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:01.7751459 PM</td>
<td>w4DFC.tmp</td>
<td>2116</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:01.8930741 PM</td>
<td>w4E38.tmp</td>
<td>1936</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:02.0401059 PM</td>
<td>w4F15.tmp</td>
<td>2132</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:02.1631946 PM</td>
<td>w4FA1.tmp</td>
<td>2084</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:02.3187729 PM</td>
<td>w501E.tmp</td>
<td>1920</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:02.4344296 PM</td>
<td>w50BB.tmp</td>
<td>284</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:02.5940949 PM</td>
<td>w5138.tmp</td>
<td>2172</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:02.7113267 PM</td>
<td>w51D4.tmp</td>
<td>892</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:02.8990254 PM</td>
<td>w5251.tmp</td>
<td>1124</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:03.0302509 PM</td>
<td>w52ED.tmp</td>
<td>2176</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:03.1843058 PM</td>
<td>w5389.tmp</td>
<td>1784</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:03.3156597 PM</td>
<td>w5426.tmp</td>
<td>1980</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:03.4443518 PM</td>
<td>w5A3.tmp</td>
<td>1576</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:03.5631673 PM</td>
<td>w5520.tmp</td>
<td>2216</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:03.6857043 PM</td>
<td>w559D.tmp</td>
<td>2404</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:03.8027174 PM</td>
<td>w561A.tmp</td>
<td>2440</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:04.3563369 PM</td>
<td>w5687.tmp</td>
<td>2468</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:04.5232190 PM</td>
<td>w588B.tmp</td>
<td>2500</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:04.6129748 PM</td>
<td>w5946.tmp</td>
<td>2496</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:04.9622704 PM</td>
<td>w5A6F.tmp</td>
<td>2476</td>
<td>Process Create</td>
</tr>
<tr>
<td>2:45:05.2665695 PM</td>
<td>w5B8B.tmp</td>
<td>2516</td>
<td>Process Create</td>
</tr>
</tbody>
</table>
Figure 8-3: Malware spawning a large number of “dummy” processes
Because of the staggering number of processes the malware creates, it’s difficult for the analyst to identify which ones are worth investigating. Sandboxes may also be overwhelmed by all the data.
**Decoy Network Communication**
Some malware variants send fake or decoy network traffic to attempt to conceal the real malicious traffic. One malware family, Formbook, is well known for using this technique. Formbook connects to a randomized list of several decoy web addresses and one actual command and control (C2) address, which can confuse analysts and sandboxes. In some cases, these decoy addresses are real domains that can lead the malware analyst down the wrong paths during the investigation. Figure 8-4 shows Formbook connecting to multiple decoy C2 addresses using normal HTTP GET requests.
Figure 8-4: Formbook connecting to decoy C2 addresses
As you can see, all of the traffic looks almost identical, but only one of these connections is for the real C2 server.
**NOTE**
You can download the Formbook malware from VirusTotal or MalShare using the following file hash:
SHA256: 08ef1473879e6e8197f1eadfe3e51a9dbdc9c892442b57a3186a64ecc9d1e41
**Anti-hooking**
Many malware analysis sandboxes and tools use API hooking, or simply hooking, to analyze malware behavior. This involves injecting a piece of code, called a hook, into the malware’s memory space. The hook then intercepts API function calls, redirects them to a different function or modifies their behavior, and passes them on to the original function. This hook is often a module, typically in the form of a DLL, that then monitors the sample as it runs (see Figure 8-5).
Figure 8-5: A sandbox hooking a running malware process
In this example, a sandbox has hooked the running malware’s process (hooked malware) via DLL injection (the sandbox hooking DLL). The sandbox modifies the first few bytes of the function it’s hooking (inside user32.dll) and inserts a jump (jmp) instruction. Now any calls to the function in the user32.dll library will jump to the hook code in the sandbox hooking DLL. The installed hook allows the sandbox to intercept and monitor function calls and potentially modify the function call parameters or return values.
To implement a hook, the sandbox agent inserts a jump statement into the beginning of a function it wishes to hook. The following assembly code excerpt shows the first few bytes of the ReadFile function after it has been hooked by a sandbox:
```
0x77000000 jmp hook_code
0x77000005 // Start of real ReadFile code
```
In this hooked code, the inserted jump statement will ensure that when the malware calls the ReadFile function, the execution flow will transfer to the sandbox hook code (hook_code) before executing the real ReadFile code. This type of hook is called *inline hooking*. Sandboxes use a technique called *process injection* to inject inline hooks into target processes. We’ll discuss injection and various types of hooking in more detail in Chapter 12.
Some analysis tools, such as API Monitor and certain debugger plug-ins, use hooks for similar purposes. One example is the popular tool ScyllaHide, which can be used to circumvent anti-debugging techniques in malware. (Chapter 10 will cover ScyllaHide in greater detail.) In this section, we’ll dig deeper into some of the ways in which malware can detect and circumvent hooking and monitoring.
**Hook Detection**
Before executing, malware will likely try to detect whether it’s being hooked by a sandbox or an analysis tool by scanning its own memory for these injected hooking modules. In Chapter 7, you saw how malware can call functions such as Module32First and Module32Next to enumerate its loaded modules. For hook detection, the malware sample may keep track of which modules it will load, so if it enumerates its loaded modules and notices an anomalous loaded module, it may assume that it’s being hooked or otherwise monitored.
Before executing its target function, malware can check whether a sandbox has modified that function’s code in an attempt to hook it. To accomplish this, the malware invokes either ReadProcessMemory or NtReadVirtualMemory to read the memory where the suspect function resides, and then it inspects the first few bytes of the function. The malware will be on the lookout for anomalous jump instructions that have been inserted into the beginning of the function in question, a sure sign of hooking, as the following pseudocode illustrates:

```
handle = GetModuleHandle("ntdll.dll")
functionAddress = GetProcAddress(handle, "NtAllocateVirtualMemory")
ReadProcessMemory(GetCurrentProcess(), functionAddress, buffer, bufferSize, &bytesRead)
if (buffer[0] == 0xE9)
    // Function hooked
    return true
```

This malware's code first obtains a handle for ntdll.dll and the address for NtAllocateVirtualMemory. The code then invokes ReadProcessMemory to inspect the first byte of the NtAllocateVirtualMemory function. If the first byte is a jump instruction (hex E9), then the malware assumes that NtAllocateVirtualMemory is hooked and that it's being monitored by a sandbox or analysis tool. Note that malware could also search for other types of jump instructions, as discussed in Chapter 3.

We'll come back to this technique in a moment in “Performing Unaligned Function Calls” on page XX.

**Hook Removal (Unhooking)**

After detecting a hook, the malware sample can attempt to remove it by restoring the original data. There are a few ways in which malware can attempt to do this.

First, malware can manually unload any suspicious modules (injected hooking DLLs) that it determines have been loaded into its process address space. Once it detects an anomalous module, it can call the FreeLibrary function. FreeLibrary takes as a parameter the handle of the library module the malware wishes to unload.

A possibly better way for malware to accomplish this unhooking is by manually reloading Windows libraries that appear to be hooked. Malware can scan its loaded libraries for signs of a hooking module, and once it detects a hook, it can unload that DLL (using a function such as FreeLibrary) and then reload the fresh, unhooked library from disk. This effectively removes any function hooks installed by the sandbox or analysis tool.

Alternatively, once the malware detects that a function is hooked, it can simply rewrite the original code into the function, replacing the jump to the hooking code. To unhook an inline hook, the malware can simply remove the hooked bytes of the function (the jump statement) or overwrite them with something else, as the following pseudocode demonstrates:

```
handle = GetModuleHandle("ntdll.dll")
functionAddress = GetProcAddress(handle, "NtAllocateVirtualMemory")
VirtualProtect(functionAddress, size, PAGE_EXECUTE_READWRITE, &oldProtect)
memcpy(functionAddress, "\x4c\x8b\xd1\xb8", 4)
VirtualProtect(functionAddress, size, oldProtect, &newOldProtect)
```
Evading Sandboxes and Disrupting Analysis 139
In this code, the malware gets the address (GetProcAddress) of the library and function it wishes to unhook (in this case, NtAllocateVirtualMemory), then calls VirtualProtect to prepare the function for modification by giving it execute, read, and write permissions. Then, the malware copies (memcpy) the four bytes (\x4c\x8b\xd1\xb8) to the beginning of the target function’s code. These bytes are the standard, unhooked, original bytes that would reside in the target function before they were hooked by the sandbox. Finally, the malware calls VirtualProtect again to change the memory permissions back to what they originally were.
Some sandboxes are aware that malware can try to unhook their installed function hooks and will be on the lookout for this. Similar to how malware scans its process memory for signs of hooking, sandboxes can periodically check whether their hooks are still in place and, if not, replace them. Or, sandboxes may monitor malware unhooking behaviors, such as by monitoring calls to ReadProcessMemory, WriteProcessMemory, memcpy, FreeLibrary, and others.
Next, let’s discuss a subtler approach that malware can take to get around sandbox hooks: hook circumvention.
**Hook Circumvention**
As opposed to hook removal, *hook circumvention* bypasses or prevents hooking altogether. Examples of hook circumvention techniques include calling Windows functions in abnormal ways and manually loading code libraries (thus sidestepping the normal library-loading process). Since some sandboxes can detect whether their hooks are removed or altered, these methods can be less noisy and more difficult to detect.
**Performing Unaligned Function Calls**
In *unaligned* function calling, the malware indirectly calls functions by jumping over the sandbox hooking code, effectively skipping it entirely. Normally, malware will call a Windows API function, such as ReadFile, by using a call instruction (call ReadFile). This instruction will jump to the beginning of ReadFile (inside the kernel32.dll module) and execute this code. If the ReadFile function is hooked by a sandbox, however, the hooking code will be executed first, as discussed earlier in this chapter. In the following code, a hook has been injected into this function:
```
0x77000000 jmp hook_code
0x77000005 // Start of real ReadFile code.
```
To implement an unaligned function call, the malware can directly jump to the 0x77000005 address by executing the instruction `jmp 0x77000005` (or adding 5 bytes to the base address, as in `jmp 0x77000000 + 0x5`), rather than calling ReadFile normally. This will skip the hooking jmp statement at 0x77000000 and directly execute the real ReadFile code starting at 0x77000005.
One caveat here is that the malware must explicitly specify the function address, meaning it must know that address beforehand. One way the malware can obtain the address is by calling `GetProcAddress`, as shown in this simplified assembly code:
```asm
--snip--
call GetProcAddress
mov address, eax
cmp [address], 0E9h
je skip_hook
--snip--
skip_hook:
lea eax, [address+5]
jmp eax
```
The malware sample calls `GetProcAddress` to get the address of its desired target function, and it then stores that value in `address` (`mov address, eax`). The address points to the beginning of the function, where the malware is checking for hooks. Next, the malware compares the first byte at this address to the hex value `0E9h` (one of the assembly opcodes for `jmp`). If this opcode exists, the code jumps to the `skip_hook` label, which adds 5 bytes to the address of the target function and stores the pointer to this final address in EAX (`lea eax, [address+5]`). Finally, the code jumps to this new address (`jmp eax`), bypassing the hook.
**Calling Low-Level and Uncommon Functions**
To circumvent hooking behaviors in sandboxes and analysis tools, some malware invokes lower-level Native API calls, attempting to avoid the more commonly hooked higher-level calls. For example, malware can call the `NtProtectVirtualMemory` function directly, rather than calling `VirtualProtect` in an attempt to bypass any hooks on the latter.
Alternatively, malware can even make direct syscalls into the kernel, bypassing the normal WinAPI calling procedures. (We discussed syscalls in Chapter 1.) Some sandboxes may not monitor direct calls into the kernel, and that can leave blind spots in the analysis reports from these sandboxes. As this is also a technique used to circumvent endpoint defenses, we’ll return to this topic in detail in Chapter 13.
Since automated sandboxes and some malware analysis tools hook or monitor the common Windows functions, malware may also use uncommon functions as a hook circumvention tactic. The Windows API contains a huge number of functions that cover nearly every task a program could want to complete, so inevitably, there are rarely used and near-duplicate functions. For example, the `SHEnumKeyEx` function is very similar to `RegEnumKey` and can also be used to enumerate registry keys, but it’s far less commonly used. Thus, `SHEnumKeyEx` may receive less attention from automated sandboxes and analysts and may go unnoticed when used by malware to thwart hooking attempts.
Unfortunately, providing a list of all of these lesser-used functions is impossible since the Windows API is so extensive. However, it’s important to keep this tactic in mind when investigating malware and researching any API calls you’re unfamiliar with.
**SOCKETS**
Most modern Windows applications use higher-level network communication libraries such as WinInet (`Wininet.dll`), WinHTTP (`Winhttp.dll`), and URLMon (`Urlmon.dll`). These are also some of the internet communication libraries most commonly loaded by malware; in fact, most of the malware examples throughout this book use these libraries. The primary benefit of these libraries for malware authors is their ease of use and simple implementation.
That said, some malware uses the lower-level library Winsock instead. With Winsock, malware authors have greater flexibility in the way they craft and manipulate their network connections. Additionally, because they operate at a lower level than the previously mentioned libraries, Winsock functions may fly under the radar of analysis tools like web proxies, and analysts can therefore miss some malware behaviors. The following pseudocode demonstrates how a malware sample might create a socket and send data to a remote server:
```
int sock = socket(AF_INET, 1, 6);
connect(sock, &sockaddr, length);
send(sock, data, strlen(data), 0);
```
In this basic example, the malware sample creates a socket (`sock`) with parameters specifying that it should use IPv4 (`AF_INET`), connection-based byte streams (`1`, or `SOCK_STREAM`), and the TCP protocol (`6`, or `IPPROTO_TCP`). Next, the malware attempts to connect to a remote server (`connect`), specifying `sock` and a pointer to the `sockaddr` structure, which contains information about the remote service, such as the hostname and TCP port number. Finally, the malware sends data (`send`) to the remote server, specifying a pointer to `data`, the buffer holding the data it wishes to send.
The details of sockets and how they work are beyond the scope of this book. For more information on sockets and all their possible parameters, MSDN is a great resource.
**Manually Loading Libraries and Calling Functions**
Malware can also manually load Windows libraries, rather than relying on the standard Windows loader. As you may recall from Chapter 1, the standard way in which Windows applications load libraries is by using functions such as `LoadLibrary`. The `LoadLibrary` function maps the requested library into memory, making for a quick and simple loading process, with the OS doing all the heavy lifting. The downside to this simplicity is that sandboxes
and other analysis tools can easily implement hooks within this library to intercept function calls.
To circumvent this, malware can manually map the library file into its process address space by using `NtMapViewOfSection`, as shown in this simplified pseudocode:
```plaintext
file_name = "C:\Windows\System32\Ntdll.dll"
NtCreateFile(file_handle, ..., file_name, ...)
NtCreateSection(section_handle, ..., file_handle)
NtMapViewOfSection(section_handle, process_handle, ...)
```
In this example, the malware uses `NtCreateFile` to get a handle to the file `C:\Windows\System32\Ntdll.dll`, which is the library it wishes to load. Next, the malware creates a section object using `NtCreateSection` and references the previously obtained file handle. A *section object* is a section of memory that can be shared with other processes, and it provides a method of mapping a file into this area of memory. After the section object is created, the malware maps the `ntdll.dll` file into it using `NtMapViewOfSection`. The `process_handle` variable represents the target process into which the file will be mapped. In this case, it’s the malware’s own process.

Another similar method is to read the file from disk, rather than mapping it into memory. To read `ntdll.dll` from disk, the malware can call `ReadFile` (or `NtReadFile`) and pass the target filename as a parameter. With either of these methods, once the library is mapped or read into memory, the malware can execute its intended functions by jumping to or calling the addresses in the target library.
**Writing Custom Functions**
Finally, malware authors may choose to rewrite Windows functions entirely and include them in their malware samples to avoid hooking. This is often the most difficult hook circumvention technique to implement; many factors come into play, and the rewritten functions must work perfectly with the victim host’s operating system. It’s quite rare to see this approach in practice.
**Anti-hooking Toolsets**

There are also tools written specifically for anti-hooking purposes. One example is the appropriately named anticuckoo project (https://github.com/David-Reguera-Garcia-Dreg/anticuckoo), which detects potential sandbox hooking by using various methods. Additionally, the tool allows users to exploit the sandbox by modifying the hooked function’s code and possibly causing a memory stack corruption, thus causing the sandbox to crash. This project doesn’t seem to be maintained anymore, but it’s a good example of research on the topic of sandbox anti-hooking. For additional information on this technique, search for the informative blog post “Prevalent Threats Targeting Cuckoo Sandbox Detection and Our Mitigation” from the researchers at Fortinet (https://www.fortinet.com).
Malware analysis is a cat-and-mouse game. Malware authors and offensive-security researchers consistently come up with new ways to detect and circumvent hooking, so malware analysts and sandbox developers must adapt. For example, the Cuckoo sandbox authors implemented several anti-anti-hooking techniques, such as preventing hooks from being overwritten by restricting memory protection modification. Many other commercial sandboxes have implemented similar functionalities.
**Circumventing Sandbox Analysis**
Because they’re automated, sandboxes are susceptible to evasion tactics at the meta level, by which I mean the level of the sandbox product itself, not its implementation or the underlying OS. For example, certain sandboxes have a size limit on submitted files, so malware authors can simply artificially increase the size of the malware file to circumvent them. Other sandboxes can’t process certain file types or scripts. It’s becoming more common for malicious files to be delivered via email in an encrypted state, with the decryption password in the text of the email. An end user may happily enter this password, decrypt the file, and run the malware, but a sandbox has a much more difficult time with this!
Also, some sandboxes have trouble monitoring certain file types. At the time of this writing, many commercial and open source sandboxes don’t fully support Microsoft .NET, which is a cross-platform development framework for Windows. Since .NET implements its own functions that differ from the native Windows and NT API functions, these sandboxes may miss important details about the malware’s behaviors and functionalities.
These are just a few examples, and there are many other methods of tricking sandboxes into not executing the malware at all. Keep this in mind when analyzing malware in an automated sandbox, and always be on the lookout for the evasion techniques listed here. It’s also important to properly evaluate a sandbox product to ensure it fits your needs before you deploy it in your environment.
**Disrupting Manual Investigation**
The techniques discussed in this chapter so far have focused on evading sandboxes, but malware can also directly interfere with manual analysis. For example, Chapter 4 described how malware can enumerate the processes running on a host so that it can detect a sandbox environment, a VM, or analysis tooling. However, along with detecting these tools, some malware can actively terminate them.
To terminate a target process, malware can iterate through the process tree by using CreateToolhelp32Snapshot, Process32First, and Process32Next, as you saw in Chapter 4. The malware can then call OpenProcess to obtain a handle to a victim process, followed by TerminateProcess. The following assembly code example demonstrates how a malware sample might terminate a remote process:
```assembly
--snip--
push [ebp+dwProcessId] ; PID of "wireshark.exe"
push 0 ; bInheritHandle
push 0x1 ; dwDesiredAccess
call OpenProcess
mov [ebp+ProcessHandle], eax
xor eax, eax
--snip--
push [ebp+ProcessHandle]
call TerminateProcess
```
In this code snippet, the malware calls `OpenProcess` with parameters representing the process ID of the target process (`wireshark.exe`, in this case), the bInheritHandle value (which isn’t important here), and the dwDesiredAccess value (the process access rights that the malware’s process is requesting). In this case, the malware is requesting access rights `0x1`, which equates to `PROCESS_TERMINATE` and allows the calling process (the malware) to terminate another process (`wireshark.exe`). Wireshark is, of course, just an example here. Malware can query and terminate any process if it has the correct permissions to do so.
**NOTE**
Sometimes renaming a malware analysis tool’s executable file before launching it will trick simple malware that’s employing this method. For example, renaming `wireshark.exe` to `krahseriw.exe` might prevent malware from “seeing” this process, thus preventing its termination. This solution won’t work in all cases, however.
Another tactic malware can use is disorienting the analyst. One interesting malware sample I’ve investigated creates a directory under `C:\Users\<user>\AppData\Local\Temp`. The malware names the directory a randomly generated number (for example, `21335493`) and writes temporary files that are necessary to its functionalities into it. In order to protect the directory, the malware constantly enumerates all open windows, looking specifically for windows that reference this temporary directory name, and issues a “kill” request for the window if there’s a match.
Here’s a simplified pseudocode example of this technique in action:
```
windows[] = EnumWindows()
for (i = 0; i < windows.length; i++) {
    window_text = GetWindowText(windows[i])
    if (window_text == "21335493") {
        PostMessage(windows[i], WM_CLOSE)
    }
}
```
This malware sample uses `EnumWindows` to enumerate all desktop windows and then loops through all the window title text, using `GetWindowText`, to look for `21335493`. If the code finds a window containing this text, the malware calls the `PostMessage` function with the `WM_CLOSE` parameter, forcing that window to close. Now, if the malware analyst tries to open the 21335493 temporary directory in, say, Windows Explorer, it will be closed automatically before the analyst can inspect its contents.
These two examples only scratch the surface. Starting in Chapter 10, I’ll discuss other interesting and novel measures that malware authors can implement in their code to confuse and impede manual analysis.
**Hypervisor Exploits and VM Escaping**
The last technique we’ll cover in this chapter may be the ultimate sandbox and VM evasion move: exploiting the hypervisor itself or escaping it entirely. While it’s rarely seen in malware, there have been occasional uses of this technique in the wild, as well as the odd vulnerability discovered in products such as VMware and VirtualBox. One notable example is Cloudburst, an exploit developed in 2009 by Immunity Inc. that affected certain versions of VMware hypervisors. Playing a specially crafted video file on the Windows VM would exploit a flaw in VMware’s display functions and possibly allow code to execute on the host OS itself.
Most known hypervisor vulnerabilities don’t directly allow code execution on the host, meaning that complete “escape” from the sandbox environment is unlikely. For example, some of these vulnerabilities allow for writing files to the host or possibly reading files from the host, but they won’t allow malicious files or code to be executed on the host. In addition, at the time of this writing, all of these discovered and reported vulnerabilities have been patched by their respective hypervisor vendors. As long as you, the malware analyst, are detonating malware on an updated and patched hypervisor, your host system is theoretically safe.
**NOTE**
I say “theoretically” here because there’s always the possibility of zero-day vulnerabilities and unknown, unreported bugs in hypervisor code that malware could potentially exploit. There’s always a risk when you’re analyzing malware, but I believe any risk is outweighed by the benefits. In Chapter 18, we’ll discuss a few steps you can take to ensure you’re working in the safest environment possible.
**Evasion Countermeasures**
As mentioned earlier, there’s a cat-and-mouse game between malware authors and malware researchers: authors invent a novel technique for detecting or bypassing analyst tools and sandboxes, and analysts and defensive-security researchers adapt. A great example of this is how far automated-analysis sandboxes have come. Many modern sandboxes have implemented countermeasures for the detection and evasion tactics mentioned throughout the past few chapters.
Sandboxes can alert malware analysts to detection and evasion attempts, providing a window into the malware internals and enabling the analysts to respond appropriately. You can manually circumvent many such
techniques by attaching the process to a debugger, setting breakpoints on interesting function calls, and modifying the malware's code in the debugger itself or in a disassembler. These function calls can be nop’ed out, jumped over, or modified (by manipulating the function parameters or return values, as Chapter 3 explained). Finally, many of the techniques can be circumvented by properly configuring your VM and hypervisor. I’ll discuss how to do so in Chapter 18.
**Summary**
This chapter gave you an overview of the methods that malware might use to evade sandboxes, VM environments, and analysis tooling when it detects that it’s being monitored. In Part III, you’ll build on some of this knowledge as we begin to explore how malware uses anti-reversing techniques to interfere with disassemblers, detect and evade dynamic code analysis tools like debuggers, and misdirect malware analysts.
FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks
Bill Yuchen Lin*, Chaoyang He*, Zihang Zeng, Hulin Wang, Yufen Huang, Mahdi Soltanolkotabi, Xiang Ren*, Salman Avestimehr*
University of Southern California
{yuchen.lin, chaoyang.he, saltanol, xiangren, avestime}@usc.edu
Abstract
Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks. Federated learning (FL) provides promising approaches for a large number of clients (e.g., personal devices or organizations) to collaboratively learn a shared global model to benefit all clients while allowing users to keep their data locally. Despite interest in studying FL methods for NLP tasks, a systematic comparison and analysis is lacking in the literature. Herein, we present FedNLP, a benchmarking framework for evaluating federated learning methods on four different task formulations: text classification, sequence tagging, question answering, and seq2seq. We propose a universal interface between Transformer-based language models (e.g., BERT, BART) and FL methods (e.g., FedAvg, FedOPT, etc.) under various non-IID partitioning strategies. Our extensive experiments with FedNLP provide empirical comparisons between FL methods and help us better understand the inherent challenges of this direction. The comprehensive analysis points to intriguing and exciting future research aimed at developing FL methods for NLP tasks.
## 1 Introduction
Fine-tuning large pre-trained language models (LMs) such as BERT (Devlin et al. 2019) often leads to state-of-the-art performance in many realistic NLP applications (e.g., text classification, named entity recognition, question answering, summarization, etc.) when large-scale, centralized training datasets are available. However, due to increasing concerns and regulations about data privacy (e.g., GDPR (Regulation 2016)), emerging data from real-world users have become much more fragmented and distributed, forming decentralized private datasets, or "data silos" (a data silo can be viewed as an individual dataset), across different clients (e.g., organizations or personal devices).
To respect the privacy of users and abide by these regulations, we must assume that users' data in a silo are not allowed to be transferred to a centralized server or to other clients. For example, a client cannot share its private user data (e.g., documents, conversations, questions asked on a website/app) with other clients. This is a common concern for organizations such as hospitals, financial institutions, or legal firms, as well as for personal computing devices such as smartphones, virtual assistants (e.g., Amazon Alexa, Google Assistant, etc.), or personal computers. However, from a machine learning perspective, a model trained on a centralized dataset that combines the data from all organizations or devices usually achieves better performance in the NLP domain. Therefore, it is of vital importance to study NLP problems in this realistic yet more challenging scenario, i.e., where training data are distributed across different clients and cannot be shared due to privacy concerns.
The nascent field of federated learning (FL) (Kairouz et al. 2019; Li et al. 2020b) aims to enable many individual clients to train their models jointly while keeping their local data decentralized and completely private from other users or a centralized server. A common training schema of FL methods is that each client sends its model parameters to the server, which updates and sends back the global model to all clients in each round. Since the raw data of one client has never
been exposed to others, FL is promising as an effective way to address the above challenges, particularly in the NLP domain, where many user-generated text data contain sensitive and/or personal information.
Despite the growing progress in the FL domain, research into and applications of FL for NLP have been rather limited. There are indeed a number of recent works on using FL methods for medical information extraction tasks (Sui et al. 2020). However, such prior work usually has its own experimental setup and specific task, making it difficult to fairly compare these FL methods and analyze their performance on other NLP tasks. We argue that future research in this promising direction (FL for NLP) would highly benefit from a universal benchmarking platform for systematically comparing different FL methods for NLP. To the best of our knowledge, such a benchmarking platform is still absent from the literature.
Therefore, our goal in this paper is to provide comprehensive comparisons between popular FL methods (e.g., FedAvg (McMahan et al. 2017a), FedOPT (Reddi et al. 2020), FedProx (Li et al. 2020c)) for four mainstream formulations of NLP tasks: text classification, sequence tagging, question answering, and seq2seq generation. Although there are few available realistic FL datasets for NLP due to privacy concerns, we manage to use existing NLP datasets to create various non-IID data partitions over clients. These non-IID partitions simulate various kinds of distribution shifts (e.g., label, features, quantities, etc.) over the clients, which often happen in real-world NLP applications. As for the base NLP models, we use the Transformer architecture (Vaswani et al. 2017) as the backbone and support a wide range of pre-trained LMs such as DistilBERT (Sanh et al. 2019), BERT (Devlin et al. 2019), BART (Lewis et al. 2020), etc. In order to conduct extensive experiments, we need to support the experiments with multiple options on dimensions such as (1) task formulations, (2) NLP models, (3) FL algorithms, and (4) non-IID partitions. Therefore, we propose FedNLP, a modular framework with universal interfaces among the above four components, which is thus more extensible for supporting future research in FL for NLP.
In summary, we aim to unblock the research of FL for NLP with the following two-fold contributions:
• **Evaluation and analysis.** We systematically compare popular federated learning algorithms for mainstream NLP task formulations under multiple non-IID data partitions, which thus provides the first comprehensive understanding. Our analysis reveals that there is a considerably large gap between centralized and decentralized training under various settings. We also analyze the efficiency of different FL methods and model sizes. With our analysis, we highlight several directions to advance FL for NLP.
• **Resource.** The implementation of our experiments forms a general open-source framework, FedNLP, which is capable of evaluating, analyzing, and developing FL methods for NLP. We also provide decentralized NLP datasets of various task formulations created by various non-IID partitioning strategies for future research.
The remainder of this paper is structured as follows. We introduce the background knowledge of federated learning and several typical FL algorithms in §2. Then, we present a few proposed non-IID partitioning strategies to create synthetic datasets for different task formulations in §3. We present our results, analysis, and findings in §4. Finally, we discuss more related work (§5) and conclusions (§6).
## 2 Federated Learning for NLP
In this section, we first introduce the background knowledge of federated learning (FL) in the context of NLP tasks. Then, we briefly illustrate a unified FL framework that can be generalized to other typical algorithms. Finally, we introduce our framework design that is used for our benchmarking experiments and form a general training pipeline for FL+NLP.
### 2.1 Federated Learning Concepts
**Federated learning (FL)** is a machine learning paradigm where multiple entities (clients) collaborate in solving a machine learning problem under the coordination of a central server or service provider. Each client’s raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objectives (Kairouz et al. 2019). Therefore, federated learning has been seen as a promising direction to decrease the risk of attack and leakage, reduce the difficulty and cost of data movement, and meet the privacy-related data storage regulations.
In the basic conception of federated learning, we would like to minimize the objective function,
\[
F(x) = \mathbb{E}_{i \sim P}[F_i(x)], \quad \text{where } F_i(x) = \mathbb{E}_{\xi \sim D_i}[f_i(x, \xi)]. \tag{1}
\]
\(x \in \mathbb{R}^d\) represents the parameter for the global model, \(F_i : \mathbb{R}^d \rightarrow \mathbb{R}\) denotes the local objective function at client \(i\), and \(P\) denotes a distribution on the collection of clients \(I\). The local loss functions \(f_i(x, \xi)\) are often the same across all clients, but the local data distribution \(D_i\) will often vary, capturing data heterogeneity.
**Federated averaging (FedAvg)** (McMahan et al. 2017a) is a common algorithm to solve (1) by dividing the training process into rounds. At the beginning of the \(t\)-th round \((t \geq 0)\), the server broadcasts the current global model \(x^{(t)}\) to a cohort of participants: a random subset \(S^{(t)}\) of the \(M\) clients in the population. Then, each sampled client in the round’s cohort performs \(\tau_i\) local SGD updates on its own local dataset and sends the local model changes \(\Delta_i^{(t)} = x_i^{(t, \tau_i)} - x^{(t)}\) to the server. Finally, the server uses the aggregated \(\Delta_i^{(t)}\) to update the global model:
\[x^{(t+1)} = x^{(t)} + \frac{\sum_{i \in S^{(t)}} p_i \Delta_i^{(t)}}{\sum_{i \in S^{(t)}} p_i},\]
where \(p_i\) is the relative weight of client \(i\). The above procedure will repeat until the algorithm converges. In the cross-silo setting where all clients participate in training on every round (each cohort is the entire population), we have \(S^{(t)} = \{1, 2, \ldots, M\}\). Consequently, we can learn a global model to benefit all clients while preserving their data privacy.
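The server update above can be sketched in a few lines of NumPy. This is a minimal illustration, not the FedNLP implementation; the relative weights \(p_i\) are taken here to be local dataset sizes, a common choice.

```python
import numpy as np

def fedavg_aggregate(global_model, client_deltas, client_weights):
    """Apply one FedAvg server update: x <- x + weighted mean of client deltas.

    global_model:   1-D array of global parameters x^(t)
    client_deltas:  list of 1-D arrays Delta_i = x_i^(t, tau_i) - x^(t)
    client_weights: list of relative weights p_i (e.g., local dataset sizes)
    """
    weights = np.asarray(client_weights, dtype=float)
    deltas = np.stack(client_deltas)
    # Weighted average of the local model changes, matching the update rule above.
    avg_delta = (weights[:, None] * deltas).sum(axis=0) / weights.sum()
    return global_model + avg_delta

# Toy round: two clients, one holding twice as much data as the other.
x = np.zeros(3)
d1, d2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
x_new = fedavg_aggregate(x, [d1, d2], client_weights=[2, 1])
```

With these weights the first client's change counts twice as much, so the new model is \((2 d_1 + d_2)/3\).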
### 2.2 Federated Optimization Framework
In this work, we propose to use FedOPT (Reddi et al. 2020), a generalized version of FedAvg, to build the FedNLP platform. As shown in the pseudo-code of Algorithm 1, the algorithm is parameterized by two gradient-based optimizers: CLIENTOPT and SERVEROPT, with client learning rate $\eta$ and server learning rate $\eta_s$, respectively. While CLIENTOPT is used to update the local models, SERVEROPT treats the negative of the aggregated local changes, $-\Delta^{(t)}$, as a pseudo-gradient and applies it to the global model. This optimization framework generalizes to many aggregation-based FL algorithms and simplifies the system design.
In terms of optimization, we explore different combinations of SERVEROPT and CLIENTOPT. The original FedAvg algorithm implicitly sets SERVEROPT and CLIENTOPT to be SGD, with a fixed server learning rate $\eta_s$ of 1.0. FedProx (Li et al. 2020c), which tackles statistical heterogeneity by restricting the local model updates to be closer to the initial (global) model, can be easily incorporated into this framework by adding L2 regularization for better training stability. Moreover, given that AdamW (Loshchilov and Hutter 2019) is widely used in NLP, we use it for CLIENTOPT and let SERVEROPT be SGD with momentum to reduce the burden of hyper-parameter tuning.
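The pseudo-gradient view of SERVEROPT can be illustrated with a toy server-side optimizer. This is a hedged sketch of the idea, not FedNLP code; the class name and interface are invented for illustration.

```python
import numpy as np

class ServerSGDMomentum:
    """SERVEROPT sketch: SGD with momentum applied to the pseudo-gradient -Delta.

    Illustrative stand-in for the server optimizer in FedOPT, not the actual
    FedNLP implementation.
    """
    def __init__(self, lr=1.0, momentum=0.9):
        self.lr, self.momentum = lr, momentum
        self.velocity = None

    def step(self, x, aggregated_delta):
        pseudo_grad = -aggregated_delta           # -Delta^(t) acts as a gradient
        if self.velocity is None:
            self.velocity = np.zeros_like(x)
        self.velocity = self.momentum * self.velocity + pseudo_grad
        return x - self.lr * self.velocity        # standard SGD-with-momentum update

# With lr = 1.0 and zero momentum, FedOPT reduces to plain FedAvg: x + Delta.
opt = ServerSGDMomentum(lr=1.0, momentum=0.0)
x = opt.step(np.zeros(3), aggregated_delta=np.array([0.5, -0.5, 0.0]))
```

Setting the momentum to a nonzero value lets the server smooth the aggregated updates across rounds, which is the main practical difference from FedAvg.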
### 2.3 FedNLP Training System: Security and Efficiency
Under the unified definition of federated learning in Algorithm 1, we design a training system to support the research of NLP in the FL paradigm. We highlight its core capabilities and design as follows.
**Supporting diverse FL algorithms.** FedNLP aims to enable flexible customization for future algorithmic innovations. We have supported a number of classical federated learning algorithms, including FedAvg (McMahan et al. 2017a), FedOPT (Reddi et al. 2020), and FedProx (Li et al. 2020c). These algorithms follow the same framework introduced in Algorithm 1. The algorithmic APIs are modularized: all data loaders follow the same format of input and output arguments, which makes them compatible with different models and algorithms and makes it easy to support new datasets; the method of defining the model and related trainer is kept the same as in centralized training, reducing the difficulty of developing the distributed training framework; and for new FL algorithm development, worker-oriented programming reduces the difficulty of message passing and message definition. More details are introduced in Appendix C.3.
**Enabling secure benchmarking with lightweight secure aggregation.** In particular, FedNLP enhances the security aspect of federated training, which is not supported by existing non-NLP-oriented benchmarking libraries (e.g., TFF, LEAF). This is motivated by the fact that model weights from clients may still carry the risk of privacy leakage (Zhu, Liu, and Han 2019). To break this barrier, we integrate secure aggregation (SA) algorithms into the FedNLP system. NLP researchers do not need to master security-related knowledge and still benefit from a secure distributed training environment. To be more specific, FedNLP supports the state-of-the-art SA algorithms LightSecAgg, SecAgg (Bonawitz et al. 2017), and SecAgg+ (Bell et al. 2020). At a high level, SA protects each client's model update with random masks that cancel out when aggregated at the server. Consequently, the server can only see the aggregated model, not the raw model from each client. In this work, our main effort is to design and optimize these SA algorithms in the context of the FedNLP system. We provide an algorithmic performance comparison in Appendix C.5.
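The mask-cancellation idea behind these SA algorithms can be demonstrated with a toy additive-masking scheme. This sketch omits everything that makes real protocols secure and robust (finite-field arithmetic, key agreement, dropout handling in SecAgg/LightSecAgg); it only shows why the server can recover the aggregate without seeing any individual update.

```python
import numpy as np

def masked_updates(client_updates, rng):
    """Toy additive-masking sketch of secure aggregation.

    Every ordered pair of clients (i, j), i < j, shares a random mask:
    client i adds it and client j subtracts it, so all masks cancel in
    the server-side sum.
    """
    n = len(client_updates)
    masked = [u.astype(float) for u in client_updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=client_updates[0].shape)
            masked[i] = masked[i] + mask
            masked[j] = masked[j] - mask
    return masked

rng = np.random.default_rng(0)
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = masked_updates(updates, rng)
# The server sees only masked vectors, yet their sum equals the true sum.
server_sum = sum(masked)
```

Each individual `masked[i]` looks like noise, but `server_sum` equals the sum of the raw updates, which is exactly what aggregation-based FL needs.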
**Realistic evaluation with efficient distributed system design.** FedNLP aims to support distributed training in multiple edge servers (e.g., AWS EC2) or edge devices (e.g., IoTs and smartphones). To achieve this, the system is designed with three layers: the application layer, the algorithm layer, and the infrastructure layer. At the application layer, FedNLP provides three modules: data management, model definition, and a single-process trainer for all task formats; at the algorithm layer, FedNLP supports various FL algorithms; at the infrastructure layer, FedNLP aims at integrating single-process trainers with a distributed learning system for FL. Specifically, we make each layer and module perform its own duties and have a high degree of modularization. We refer readers to Appendix C for a detailed description of the system architecture and design philosophy.
## 3 Benchmark for FedNLP
Here we introduce how we create benchmark datasets covering a wide range of NLP tasks with different non-IID partitioning methods for evaluating federated learning methods.
### 3.1 Task Formulations, Datasets, and Models
There are numerous NLP applications, but most of them can be categorized based on four mainstream formulations: text
classification (TC), sequence tagging (ST), question answering (QA), and seq2seq generation (SS). The formal definition of each formulation is detailed in Appendix §B. To cover all formulations while keeping our experiments in a reasonable scope, we select one representative task for each formulation:
- **Text Classification**: 20Newsgroup (Lang 1995) is a news classification dataset with annotations for 20 labels.
- **Sequence Tagging**: OntoNotes (Pradhan et al. 2013) (5.0) is a corpus where sentences have annotations for the entity spans and types. We use it for the named entity recognition task, which is fundamental to information extraction and other applications.
- **Question Answering**: MRQA (Fisch et al. 2019) is a benchmark that unifies multiple reading-comprehension datasets into a single extractive question-answering format.
- **Seq2Seq**: Gigaword (DBL 2012) is a news corpus with headlines that is often used for testing seq2seq models as a summarization task. Other tasks, such as dialogue response generation and machine translation, can also be adapted to this format.
We show the basic statistics of the selected datasets in Table 1. Note that FedNLP, as a research platform, supports a much wider range of specific tasks for each formulation; here we only introduce the ones used in our experiments with typical settings. Moreover, our contribution is more of a general FL+NLP benchmarking platform than of particular datasets and partitions.
**Base NLP Models.** Fine-tuning pre-trained LMs has been the *de facto* method for modern NLP research, and thus we focus on testing Transformer-based architectures in FedNLP. Specifically, we choose to use BART (Lewis et al. 2020), a text-to-text Transformer model similar to the T5 model (Raffel et al. 2020), for seq2seq tasks.
<table>
<thead>
<tr>
<th>Task</th>
<th>Txt.Cls.</th>
<th>Seq.Tag.</th>
<th>QA</th>
<th>Seq2Seq</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dataset</td>
<td>20News</td>
<td>Onto.</td>
<td>MRQA</td>
<td>Giga.</td>
</tr>
<tr>
<td># Training</td>
<td>11.3k</td>
<td>50k</td>
<td>53.9k</td>
<td>10k</td>
</tr>
<tr>
<td># Test</td>
<td>7.5k</td>
<td>5k</td>
<td>3k</td>
<td>2k</td>
</tr>
<tr>
<td># Labels</td>
<td>20</td>
<td>37*</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td>Metrics</td>
<td>Acc.</td>
<td>F-1</td>
<td>F-1</td>
<td>ROUGE</td>
</tr>
</tbody>
</table>
Table 1: Statistics of the selected datasets for our experiments. *37 is the size of the tag vocabulary.
### 3.2 Non-IID Partitioning Strategies
The existing datasets have been used for centralized training in NLP. As our focus here is to test decentralized learning methods, we need to distribute the existing datasets to a set of clients. It is the non-IIDness of the client distribution that makes federated learning a challenging problem. Thus, we extend the common practice widely used in prior works to the NLP domain for generating synthetic FL benchmarks (Li et al. 2021). We first introduce how we control the label distribution shift for TC and ST, then the quantity distribution shift, and finally how we model the distribution shift in terms of input features for non-classification NLP tasks (e.g., summarization).
**Non-IID Label Distributions.** Here we present how we synthesize the data partitions such that clients share the same (or very similar) number of examples but have different label distributions from each other. We assume that on every client, training examples are drawn independently with labels following a categorical distribution over $L$ classes parameterized by a vector $q$ ($q_i \geq 0$, $i \in [1, L]$, and $\|q\|_1 = 1$). To synthesize a population of non-identical clients, we draw $q \sim \text{Dir}_L(\alpha p)$ from a Dirichlet distribution, where $p$ characterizes a prior class distribution over $L$ classes and $\alpha > 0$ is a concentration parameter controlling the identicalness among clients. For each client $C_j$, we draw a $q_j$ as its label distribution and then sample examples without replacement from the global dataset according to $q_j$. With $\alpha \to \infty$, all clients have identical distributions to the prior (i.e., a uniform distribution); with $\alpha \to 0$, on the other extreme, each client holds examples from only one class chosen at random. In Figure 2, we show a series of heatmaps visualizing the distribution differences between clients. Figure 3 shows an example of the concrete label distributions for all clients with different $\alpha$. We can see that when $\alpha$ is smaller, the overall label distribution shift becomes larger.
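A simplified sketch of this Dirichlet partitioning is shown below. For ease of implementation it draws per-class shares over clients (an equivalent formulation commonly used in practice) rather than per-client distributions over classes; the function and its interface are illustrative, not the FedNLP code.

```python
import numpy as np

def dirichlet_label_partition(labels, n_clients, alpha, rng):
    """Assign example indices to clients with Dirichlet-skewed label mixes.

    For each class, a Dirichlet draw over clients decides what fraction of
    that class each client receives. Small alpha -> highly skewed labels;
    large alpha -> near-uniform label distributions.
    """
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        # Cut points that split this class across clients by their shares.
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

rng = np.random.default_rng(42)
labels = rng.integers(0, 5, size=1000)   # toy dataset: 5 classes, 1000 examples
parts = dirichlet_label_partition(labels, n_clients=10, alpha=0.1, rng=rng)
```

Every example is assigned to exactly one client, and with `alpha=0.1` most clients end up dominated by one or two classes.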
**Controlling non-IID Quantity.** It is also common that different clients have very different data quantities while sharing similar label distributions. We thus also provide a quantity-level Dirichlet allocation $z \sim \text{Dir}_N(\beta)$, where $N$ is the number of clients. We can then allocate examples in a global dataset $D$ to all clients according to the distribution $z$, i.e., $|D_i| = z_i \cdot |D|$. If we would like to model both quantity and label distribution shift, it is also easy to combine the two strategies.
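The quantity-level allocation admits an equally short sketch (again illustrative, not the FedNLP implementation):

```python
import numpy as np

def quantity_skew_partition(n_examples, n_clients, beta, rng):
    """Allocate a global dataset across clients with Dirichlet-skewed sizes.

    z ~ Dir_N(beta) gives each client's fraction of the data; the label mix
    stays (approximately) IID because we split a shuffled index list.
    """
    z = rng.dirichlet(beta * np.ones(n_clients))
    idx = rng.permutation(n_examples)
    cuts = (np.cumsum(z)[:-1] * n_examples).astype(int)
    return [part.tolist() for part in np.split(idx, cuts)]

rng = np.random.default_rng(7)
parts = quantity_skew_partition(1000, n_clients=10, beta=0.5, rng=rng)
sizes = [len(p) for p in parts]   # highly unequal sizes for small beta
```

For small `beta` a few clients hold most of the data, while the partition remains exhaustive and disjoint.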
Table 2: The comparisons between different FL methods under the same setting on different NLP tasks. The number of workers per round is 10, except for the MRQA task, which uses 6.
<table>
<thead>
<tr>
<th>Task</th>
<th>Dataset</th>
<th>Partition</th>
<th>Clients</th>
<th>FedAvg</th>
<th>FedProx</th>
<th>FedOPT</th>
<th># Rounds</th>
</tr>
</thead>
<tbody>
<tr>
<td>Text Classification</td>
<td>20news</td>
<td>$\alpha = 1$ (label shift)</td>
<td>100</td>
<td>0.5142</td>
<td>0.5143</td>
<td>0.5349</td>
<td>22</td>
</tr>
<tr>
<td>Sequence Tagging</td>
<td>OntoNotes</td>
<td>$\alpha = 0.1$ (label shift)</td>
<td>30</td>
<td>0.7382</td>
<td>0.6731</td>
<td>0.7918</td>
<td>17</td>
</tr>
<tr>
<td>Question Answering</td>
<td>MRQA</td>
<td>natural factor</td>
<td>6</td>
<td>0.2707</td>
<td>0.2706</td>
<td>0.3280</td>
<td>13</td>
</tr>
<tr>
<td>Seq2Seq Generation</td>
<td>Gigaword</td>
<td>$\alpha = 0.1$ (feature shift)</td>
<td>100</td>
<td>0.3192</td>
<td>0.3169</td>
<td>0.3037</td>
<td>13</td>
</tr>
</tbody>
</table>
**Natural Factors.** For datasets like MRQA, we consider a cross-silo setting where each client is associated with a particular sub-dataset (out of the six datasets of the same format), forming a natural distribution shift based on inherent factors such as data source and annotation style.
## 4 Experiments and Analysis
In this section, we analyze typical federated learning methods on our benchmark datasets along multiple dimensions, using the base NLP models listed previously. We put more implementation details and additional results in the Appendix. We organize our extensive experimental results and findings as a collection of research questions with answers.
**Experimental Setup and Hyper-parameters.** We use DistilBERT and BART-base for most of our experiments, as the former is a distilled version of the BERT model with a 7x speed improvement over BERT-base on mobile devices (a common scenario for FL applications), and the BART-base model is the most suitable option considering the trade-off between performance and computation cost. We leave our implementation details and the selected hyper-parameters in the submitted supplementary materials.
Our experiments cover both cross-device and cross-silo settings. As shown in Table 2, in the cross-device setting, we use uniform sampling to select 10 clients for each round when the client number in a dataset is very large (e.g., 100). For the cross-silo setting, each round will select the same number of clients (we use 6 for the QA task). The local epoch number is set to 1 for all experiments. To make our results reproducible, we use wandb.ai to store all experiment logs and hyper-parameters as well as running scripts.
**Q1: How do popular FL methods perform differently under the same setting?**
We compare the three typical FL methods under the same setting (i.e., data partition, communication rounds, training hyper-parameters, etc.) for each task formulation. As shown in Table 2, we report the results of FedAvg, FedProx, and FedOPT. We can see that overall FedOPT performs better than the other two methods, with the only exception being the seq2seq generation task. FedAvg and FedProx perform similarly, with marginal differences, although FedAvg outperforms FedProx in sequence tagging. These two exceptions are surprising findings, as many prior works in the FL community show that, on vision tasks and datasets, FedOPT is generally better than FedProx, which in turn outperforms FedAvg.
We conjecture that such inconsistent performance across tasks suggests that differences in loss functions have a great impact on FL performance. Seq2seq and sequence tagging tasks usually have more complex loss landscapes than text classification, as both are typical structured prediction tasks, while text classification has a much smaller output space. From Fig. 4, we see that FedOPT outperforms the other two methods at the beginning but gradually becomes worse over time. This tells us that using AdamW as the client optimizer may not always be a good choice, especially for complex tasks such as seq2seq generation, as its adaptive scheduling of learning rates might cause implicit conflicts. These observations suggest that federated optimization algorithms need to be tailored to various NLP tasks, and that exploring FL-friendly model architectures or loss functions can also be a promising direction for addressing these challenges.
**Q2: How do different non-IID partitions of the same data influence FL performance?**
The FedNLP platform allows users to investigate the performance of an FL algorithm under a wide range of data partitioning strategies, as discussed in §3.2. Here we look at the training curves of FedOPT on different partitions, as shown in Figure 5. We reveal several findings:
- When $\alpha$ is smaller (i.e., the partition is more non-IID in terms of their label distribution), the performance tends to degrade, based on the three curves ($\alpha = \{1, 5, 10\}$).
- The variance is also larger when the label distribution shift is larger. Both uniform and quantity-skew partitions have a smoother curve, while the variance is smaller for a larger $\alpha$ (e.g., 10).
- Quantity skew does not introduce a great challenge for federated learning when the label distribution is closer to the uniform one.
These findings suggest that it is important to design algorithms to mitigate data heterogeneity. One promising direction is personalized FL, which enables each client to learn its own personalized model by adapting to its local data distribution and system resources (Dinh, Tran, and Nguyen 2020; Fallah, Mohktari, and Ozdaglar 2020; Li et al. 2020a).
**Q3: How does freezing of Transformers influence the FL performance?**
Communication cost is a major concern in the federated learning process. It is thus natural to consider freezing some Transformer layers of the client models in order to reduce the size of the trainable parameters that will be transmitted between servers and clients. To study the influence of freezing layers on the FL performance, we conduct a series of experiments that freeze the layers from the embedding layer ($E$) to the top layer ($L_5$) of DistilBERT with both centralized training and FedOPT on the text classification task.
We report our results in Table 3 and Figure 6. We find that in centralized training, the largest performance gain happens when we unfreeze the last layer, while in FedOPT we have to unfreeze the last three layers to achieve performance comparable to the full model. This suggests that reducing communication costs by freezing some layers of Transformer LMs is feasible, though one should be aware that experience from centralized training may not generalize to the FL experiments.
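As a concrete illustration of this freezing setup, the helper below marks the embedding layer and the bottom Transformer blocks as frozen by toggling `requires_grad`. The parameter-name patterns follow DistilBERT's naming convention, but the helper itself and the tiny `_Param` stand-in used for the demo are our own sketch, not FedNLP code.

```python
class _Param:
    """Minimal stand-in for a framework parameter object (demo only)."""
    def __init__(self):
        self.requires_grad = True

def freeze_bottom_layers(named_parameters, num_unfrozen_layers, total_layers=6):
    """Freeze the embedding layer and the bottom Transformer blocks.

    Parameter names are assumed to follow the DistilBERT convention:
    'embeddings.*' and 'transformer.layer.<i>.*'. Everything else
    (e.g., the classifier head) stays trainable.
    """
    cutoff = total_layers - num_unfrozen_layers
    for name, param in named_parameters:
        frozen = name.startswith("embeddings.")
        if name.startswith("transformer.layer."):
            layer_idx = int(name.split(".")[2])
            frozen = layer_idx < cutoff
        param.requires_grad = not frozen

# Demo on a hypothetical subset of DistilBERT-style parameter names.
demo = {name: _Param() for name in [
    "embeddings.word_embeddings.weight",
    "transformer.layer.0.attention.q_lin.weight",
    "transformer.layer.3.attention.q_lin.weight",
    "transformer.layer.5.ffn.lin1.weight",
    "classifier.weight",
]}
freeze_bottom_layers(demo.items(), num_unfrozen_layers=3)
trainable = {name: p.requires_grad for name, p in demo.items()}
```

With three layers unfrozen, only layers 3–5 and the classifier remain trainable, which is the FedOPT configuration that matches full-model performance above.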
**Q4: Are compact models such as DistilBERT adequate for FL+NLP?**
We know that BERT performs better than DistilBERT owing to its larger model size. However, is it cost-effective to use BERT rather than DistilBERT? To study this, we compare the performance of both models with FedOPT on text classification, using the same settings as the experiments above. As shown in Figure 7, although BERT-base achieves better performance, DistilBERT is not significantly worse. Considering the communication cost (BERT-base is almost 2x larger than DistilBERT), we argue that DistilBERT is the more cost-effective choice for both experimental analysis and realistic applications.
## 5 Related Work
### FL benchmarks and platforms
In the last few years, a proliferation of frameworks and benchmark datasets has been developed to enable researchers to better explore and study algorithms and modeling for federated learning, both from academia: LEAF (Caldas et al. 2018), FedML (He et al. 2020a), Flower (Beutel et al. 2020); and from industry: PySyft (Ryffel et al. 2018), TensorFlow-Federated (TFF) (Ingerman and Ostrowski 2019), FATE (Yang et al. 2019), Clara (NVIDIA 2019), PaddleFL (Ma et al. 2019), OpenFL (Intel® 2021). However, most platforms only focus on designing a unified framework for federated learning methods and do not provide a dedicated environment for studying NLP problems with FL methods. LEAF (Caldas et al. 2018) contains a few text datasets; however, it is limited to classification and next-word prediction datasets and does not consider pre-trained language models. We want to provide a dedicated platform for studying FL methods in realistic NLP applications with state-of-the-art language models.
### Federated learning in NLP applications
A few prior works have begun to apply FL methods to privacy-oriented NLP applications. For example, federated learning has been applied to many keyboard-related applications (Hard et al. 2018; Stremmel and Singh 2020; Leroy et al. 2019; Ramaswamy et al. 2019; Yang et al. 2018a), to sentence-level text intent classification using Text-CNN (Zhu et al. 2020), and to pretraining and fine-tuning of BERT using medical data from multiple silos without fetching all data to the same place (Liu and Miller 2020). FL methods have also been proposed to train high-quality language models that can outperform models trained without federated learning (Ji et al. 2019; Chen et al. 2019). Beyond these applications, some work has been done on medical relation extraction (Ge et al. 2020) and medical named entity recognition (Sui et al. 2020). These methods use federated learning to preserve the privacy of sensitive medical data and to learn from data on different platforms without the need to exchange data between them.
Our work aims to provide a unified platform for studying various NLP applications in a shared environment so that researchers can better design new FL methods either for a specific NLP task or as a general-purpose model. The aforementioned prior works would thus be a particular instance of the settings supported by the FedNLP platform.
## 6 Conclusion
We present FedNLP, an open-source benchmarking framework aiming to develop, evaluate, and analyze FL methods for NLP tasks. On top of FedNLP, we conduct extensive experiments covering three typical FL methods and four mainstream NLP task formulations under different non-IID partition methods. Our findings suggest that there is still a huge gap between centralized training and federated learning. From our analysis, there are a few observations that conflict with the conventional FL evaluation on non-NLP tasks because of the inherent complexity of structured prediction problems in NLP (e.g., seq2seq) — suggesting future directions on syncing learning rates for fine-tuning Transformer-based NLP models. We also empirically show the effect of fine-tuning different numbers of parameters of pre-trained models for reducing the cost of data transfer via freezing bottom layers. Finally, we have also suggested several future directions in the FL+NLP research.
## 7 Future Directions
Minimizing the performance gap. In the FL setting, we demonstrate that federated fine-tuning still has a large accuracy gap on non-IID datasets compared to centralized fine-tuning. Developing FL algorithms tailored to Transformer models on NLP tasks is thus of the highest priority.
Improving the system efficiency and scalability. Transformer models are usually large, while resource-constrained edge devices may not be able to run large models. Designing efficient FL methods for NLP tasks is thus a practical problem worth solving. How to adopt a reasonable user selection mechanism to avoid stragglers and speed up the convergence of training algorithms is also a pressing problem to be solved.
Trustworthy and privacy-preserving NLP. Although our focus in this paper is the implementation and performance analysis of FL methods for NLP tasks, we argue that analyzing and assuring the privacy-preserving ability of these methods is an important future research direction. This remains an open problem for both the FL and NLP areas; improving the trustworthiness of decentralized learning is an orthogonal goal, and studying privacy preservation first requires an existing FL+NLP platform. This is also part of our motivation in proposing FedNLP, and we believe our framework provides a set of flexible interfaces for future work to analyze and improve the privacy-preserving ability of FL methods for NLP tasks and beyond.
Personalized FedNLP. From the perspective of the data itself, user-generated text is inherently personalized. Designing personalized algorithms to improve model accuracy or fairness is a very promising direction. In addition, it is also an interesting problem to adapt the heterogeneous model architecture for each client in the FL network. We show that it is feasible to only fine-tune a small amount of the parameters of LMs, so it is promising to adapt recent prefix-tuning methods (Li and Liang 2021) for personalizing the parameters of NLP models within the FedNLP framework.
## References
He, C.; Balasubramanian, K.; Ceyani, E.; Rong, Y.; Zhao, P.; Huang, J.; Annavaram, M.; and Avestimehr, S. 2021. FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks. *ArXiv.*
Joshi, M.; Choi, E.; Weld, D.; and Zettlemoyer, L. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In *Proc. of ACL.*
Li, Q.; Diao, Y.; Chen, Q.; and He, B. 2021. Federated Learning on Non-IID Data Silos: An Experimental Study. *ArXiv.*
Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. In *Proc. of ICLR.*
Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In *Proc. of EMNLP.*
## Appendix for the FedNLP Submission
### A Motivation Behind FL+NLP
Many realistic NLP services heavily rely on users’ local data (e.g., text messages, documents and their tags, questions and selected answers, etc.), which can be located at either personal devices or larger data-silos for organizations. These local data are usually regarded as highly private and thus not directly accessible by anyone, according to many data privacy regulations; this makes it difficult to train a high-performance model to benefit users. Federated learning aims to solve machine learning under such a privacy-preserving use case, thus offering a novel and promising direction to the community: FL+NLP.
Apart from the goal of learning a shared global model for all clients, FL also provides a new perspective on many other interesting research questions in NLP. One related direction is to develop personalized models for NLP applications, which requires both protection of data privacy and the ability to adapt to users' own feature distributions caused by language styles, topics of interest, and so on. The recent concerns about adversarial attacks and safety issues of NLP models are also highly related to FL+NLP. We thus believe FL+NLP is of vital importance for applying NLP technologies in realistic use cases and could benefit many related research areas.
#### A.1 Challenges of Applying FL in NLP
Given the promising benefits of studying FL+NLP, this research direction is currently blocked by the lack of a standardized platform providing the fundamental building blocks: benchmark datasets, NLP models, FL methods, evaluation protocols, etc. Most current FL platforms focus on unifying various FL methods and use computer vision models and datasets for their experiments, but lack the ability to connect the study of pre-trained language models (the most popular models in NLP) with realistic NLP applications across various task formulations.
The first challenge in developing a comprehensive and universal platform for FL+NLP is to deal with the various task formulations of realistic NLP applications, which have different input and output formats (Section B). As non-IID data partition over clients is the major feature of FL problems, it is also challenging to simulate realistic non-IID partitions for existing NLP datasets (Section 3.2). Finally, a platform must also integrate various FL methods with Transformer-based NLP models for a variety of task types, so a flexible and extensible learning framework is needed. In particular, the conventional trainer component of Transformers now needs to be modified for efficient and safe communication in federated learning (Section C).
### B Basic Formulations of NLP Tasks
There are various types of NLP applications, but many of them share a similar task formulation (i.e., input and output formats). We show four common task formulations that cover most mainstream NLP applications: text classification, sequence tagging, question answering, and sequence-to-sequence generation.
Text Classification (TC) The input is a sequence of words, \( x = [w_1, w_2, \ldots] \), and the output is a label \( y \) in a fixed set of labels \( \mathcal{L} \). Many NLP applications can be formulated as text classification tasks. For example, we can use TC models for classifying the topic of a news article to be political, sports, entertainment, etc., or analyzing movie reviews to be positive, negative or neutral.
Sequence Tagging (ST) The input is a sequence of words, \( x = [w_1, w_2, \ldots, w_N] \), and the output is a same-length sequence of tags \( y = [t_1, t_2, \ldots, t_N] \), where \( t_i \) is in a fixed set of labels \( \mathcal{L} \). The main difference between TC and ST is that ST learns to classify the label of each token in a sentence, which is particularly useful in analyzing syntactic structures (e.g., part-of-speech analysis, phrase chunking, and word segmentation) and extracting spans (e.g., named entity recognition).
Question Answering (QA) Given a passage \( P = [w_1, w_2, \ldots, w_N] \) and a question \( q \) as input, the task is to locate a span in the passage as the answer to the question. Thus, the output is a pair of token index \( (s, e) \) where \( s, e \in \{1, 2, \ldots, N\} \) for denoting the begin and end of the span in the passage. This particular formulation is also known as reading comprehension.
Natural Language Generation (NLG) Both input and output are sequences of words, \( x = [w_1^1, w_2^1, \ldots, w_N^1] \), \( y = [w_1^2, w_2^2, \ldots, w_M^2] \). This formulation is shared by many realistic applications such as summarization, response generation in dialogue systems, machine translation, etc.
Language Modeling (LM) The left-to-right language modeling task considers a sequence of words as the input \( x = [w_1, w_2, \ldots, w_n] \) and a token \( y = w_{n+1} \) as the output. The output token is expected to be the most plausible next word of the incomplete sentence denoted as \( x \). Although the direct application of LM is limited, a high-performance pre-trained language model can benefit a wide range of NLP applications (as above) via fine-tuning. It also serves as an excellent test bed as it requires no human annotations at all.
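To make these formulations concrete, here are toy examples of their input/output shapes; all labels, spans, and tokens below are invented for illustration.

```python
# Illustrative input/output shapes for the task formulations above
# (toy values; labels and spans are made up for the example).
tc_example = {"x": ["the", "movie", "was", "great"], "y": "positive"}

st_example = {"x": ["John", "lives", "in", "Paris"],
              "y": ["B-PER", "O", "O", "B-LOC"]}          # same length as x

qa_example = {"passage": ["Paris", "is", "the", "capital", "of", "France"],
              "question": "What is the capital of France?",
              "y": (0, 0)}                                 # answer span (s, e)

nlg_example = {"x": ["long", "article", "text"],
               "y": ["short", "summary"]}                  # M may differ from N

lm_example = {"x": ["the", "cat", "sat", "on", "the"], "y": "mat"}
```

The key structural differences are visible directly: ST output length equals input length, QA output is an index pair into the passage, and NLG output length is unconstrained.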
Others. There are some other applications that are not covered by the above four basic formulations, and our extensible platform (detailed in Section C) enables users to easily implement their specific tasks. For each task formulation, we show which datasets are used in FedNLP and how we partition them in Section 3.
### C The System Design of FedNLP
The FedNLP platform consists of three layers: the application layer, the algorithm layer, and the infrastructure layer. At the application layer, FedNLP provides three modules: data management, model definition, and a single-process trainer for all task formats. At the algorithm layer, FedNLP supports various FL algorithms. At the infrastructure layer, FedNLP aims at integrating single-process trainers with a distributed learning system for FL. Specifically, we make each layer and module perform its own duties and maintain a high degree of modularization.

Figure 8: The probability density of the quantity of training examples in each of the 100 clients on the 20News dataset with different $\beta$. When $\beta$ is larger, all clients share more similar numbers of examples; when $\beta$ is smaller, the range of the quantity is much wider, i.e., there are larger differences between clients in terms of their dataset sizes.
#### C.1 Overall Workflow
The module calling logic of the whole framework is shown on the left of Figure 9. When we start federated training, we first run the launcher script, then perform device allocation, data loading, and model creation, and finally call the API of the federated learning algorithm. This process is expressed in Python-style code (see Alg. 2).
#### C.2 The Application Layer
Data Management. The DataManager controls the whole workflow from loading data to returning trainable features. Specifically, DataManager reads h5py data files and drives a preprocessor to convert raw data into features. There are four types of DataManager, one per task formulation. Users can customize their own DataManager by inheriting one of the DataManager classes, specifying data operation functions, and embedding a particular preprocessor. Note that the raw data's H5Py file and the non-IID partition file are preprocessed offline; DataManager only loads them at runtime.
Model Definition. We support two types of models: Transformer and LSTM. For Transformer models, to integrate with the existing NLP ecosystem, our framework is compatible with the HuggingFace Transformers library (Wolf et al. 2020), so various types of Transformers can be reused directly without re-implementation. Specifically, our code is compatible with HuggingFace's three main classes: Tokenizer, Model, and Config. Users can also customize them based on HuggingFace's code. Although LSTM has gradually fallen out of the mainstream, we still support it for the framework's completeness, as it may meet particular use cases in federated settings.
NLP Trainer (single-process perspective). The most prominent feature of the task-specific NLP Trainer is that it does not require users to have any background in distributed computing; users of FedNLP only need to write single-process code. A user inherits the Trainer class in the application layer and implements the four methods shown in the figure: (1) get_model_params() allows the algorithm layer to obtain model parameters and transmit them to the server; (2) set_model_params() receives the model aggregated by the server and updates the parameters of the local model; (3) and (4) train() and test() only need to consider the data of a single user, meaning the trainer code is completely consistent with centralized training.
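A minimal sketch of this interface, with the model reduced to a plain dict of weights so the example stays self-contained. `ToyTCTrainer` and its update rule are invented for illustration; a real FedNLP trainer would wrap a Transformer model and an optimizer.

```python
class Trainer:
    """Minimal stand-in for the single-process Trainer interface."""
    def __init__(self, model):
        self.model = model
    def get_model_params(self):
        raise NotImplementedError
    def set_model_params(self, params):
        raise NotImplementedError
    def train(self, train_data, device=None, args=None):
        raise NotImplementedError
    def test(self, test_data, device=None, args=None):
        raise NotImplementedError

class ToyTCTrainer(Trainer):
    """Illustrative trainer whose 'model' is just a dict of scalar weights."""
    def get_model_params(self):
        return dict(self.model)            # shipped to the server
    def set_model_params(self, params):
        self.model.update(params)          # overwrite with aggregated weights
    def train(self, train_data, device=None, args=None):
        # Single-client logic only: move each weight halfway to its target.
        for key, target in train_data.items():
            self.model[key] += 0.5 * (target - self.model[key])
    def test(self, test_data, device=None, args=None):
        return sum(abs(self.model[k] - v) for k, v in test_data.items())
```

Note that `train()` contains no distributed logic at all; the algorithm layer calls `get_model_params()`/`set_model_params()` around it to drive the federated rounds.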
#### C.3 The Algorithm Layer
In the design of the algorithm layer, we follow the principle of one-line API. The parameters of the API include model, data, and single-process trainer (as shown in Algorithm 2). The algorithms we support include:
Centralized Training. We concatenate all client datasets and use the global data $D_G$ to train a global model, i.e., the conventional protocol for learning an NLP model on a dataset.
FedAvg (McMahan et al. 2017a) is the de facto method for federated learning; it assumes that both clients and the server use the SGD optimizer for updating model weights.
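The FedAvg server step is a weighted average of client parameters; it can be sketched as follows (a toy version over dicts of scalars, not the actual FedNLP implementation):

```python
def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client model parameters (the FedAvg server step).

    client_params: list of dicts mapping parameter name -> float value
    client_sizes:  number of local examples per client (the FedAvg weights)
    """
    total = sum(client_sizes)
    keys = client_params[0].keys()
    return {
        k: sum(p[k] * n for p, n in zip(client_params, client_sizes)) / total
        for k in keys
    }
```

Clients with more local data thus pull the global model more strongly toward their local solution.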
Algorithm 2: The FedNLP Workflow
```python
# using text classification (TC) as an example
# initialize distributed computing environment
process_id, ... = FedNLP_init()
# GPU device management
device = map_process_to_gpu(process_id, ...)
# data management
data_manager = TCDataManager(process_id, ...)
data_dict = data_manager.load_federated_data(process_id)
# create model by specifying the task
client_model, ... = create_model(model_args, formulation="classification")
# define a customized NLP Trainer
client_trainer = TCTrainer(device, client_model, ...)
# launch the federated training (e.g., FedAvg)
FedAvg_distributed(..., device, client_model, data_dict, ..., client_trainer)
```
FedProx (Li et al. 2020c) can tackle statistical heterogeneity by restricting the local model updates to be closer to the initial (global) model with L2 regularization for better stability in training.
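The FedProx idea reduces to adding a proximal term to each client's local objective; a scalar sketch, with `mu` denoting the regularization strength (names are ours):

```python
def fedprox_local_loss(task_loss, local_params, global_params, mu):
    """Local FedProx objective: task loss plus (mu/2) * ||w - w_global||^2.

    The proximal term penalizes local weights that drift from the global
    model, which stabilizes training under statistical heterogeneity.
    """
    prox = sum((local_params[k] - global_params[k]) ** 2 for k in global_params)
    return task_loss + 0.5 * mu * prox
```

With `mu = 0`, this reduces exactly to the FedAvg local objective.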
FedOPT (Reddi et al. 2020) is a generalized version of FedAvg. There are two gradient-based optimizers in the algorithm: ClientOpt and ServerOpt (please refer to the pseudo-code in the original paper (Reddi et al. 2020)). While ClientOpt is used to update the local models, ServerOpt treats the negative of the aggregated local changes \(-\Delta^{(t)}\) as a pseudo-gradient and applies it to the global model. In our FedNLP framework, by default, we set ClientOpt to AdamW (Loshchilov and Hutter 2019) and ServerOpt to SGD with momentum 0.9, fixing the server learning rate at 1.0.
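The server side of this default configuration can be sketched as follows: the function below averages the client deltas, treats the negative average as the pseudo-gradient, and applies SGD with momentum (a toy version over dicts of scalars; the function name is our own):

```python
def fedopt_server_step(global_params, client_deltas, momentum_buf,
                       server_lr=1.0, momentum=0.9):
    """FedOPT server update sketch: SGD-with-momentum ServerOpt.

    client_deltas: per-client dicts of (local_model - global_model) changes.
    The negative of their average is used as a pseudo-gradient.
    """
    new_params, new_buf = {}, {}
    for k in global_params:
        avg_delta = sum(d[k] for d in client_deltas) / len(client_deltas)
        pseudo_grad = -avg_delta
        new_buf[k] = momentum * momentum_buf.get(k, 0.0) + pseudo_grad
        new_params[k] = global_params[k] - server_lr * new_buf[k]
    return new_params, new_buf
```

With zero momentum and a server learning rate of 1.0, a single step reduces to plain FedAvg averaging of the deltas.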
Each algorithm includes two core objects, ServerManager and ClientManager, which integrate the communication module ComManager from the infrastructure layer and the Trainer of the training engine to complete the distributed algorithm protocol and edge training. Note that users can customize the Trainer by passing a customized Trainer through the algorithm API.
#### C.4 The Infrastructure Layer
The infrastructure layer includes three modules:
1) Users can write distributed scripts to manage GPU resource allocation. In particular, FedNLP provides the GPU assignment API (map_process_to_gpu() in Algorithm 2) to assign specific GPUs to different FL Clients.
2) The algorithm layer can use a unified and abstract ComManager to complete a complex algorithmic communication protocol. Currently, we support MPI (Message Passing Interface), RPC (Remote Procedure Call), and MQTT (Message Queuing Telemetry Transport) communication backends. MPI meets distributed training needs within a single cluster; RPC meets the communication needs of cross-data-center deployments (e.g., cross-silo federated learning); MQTT can meet the communication needs of smartphones or IoT devices.
3) The third part is the training engine, which reuses the existing deep learning training engines by presenting as the Trainer class. Our current version of this module is built on PyTorch, but it can easily support frameworks such as TensorFlow. In the future, we may consider supporting the lightweight edge training engine optimized by the compiler.
#### C.5 Enhancing Security with Secure Aggregation (SA)
FedNLP supports the state-of-the-art SA algorithms LightSecAgg, SecAgg (Bonawitz et al. 2017), and SecAgg+ (Bell et al. 2020). Here, we provide a short comparison of the three. In general, LightSecAgg provides the same model privacy guarantees as SecAgg and SecAgg+ while substantially reducing the aggregation (hence run-time) complexity (Figure 10). The main idea of LightSecAgg is that each user protects its local model using a locally generated random mask. This mask is then encoded and shared with other users, in such a way that the aggregate mask of any sufficiently large set of surviving users can be directly reconstructed at the server. Our main effort in FedNLP is integrating these algorithms, optimizing their system performance, and designing user-friendly APIs to make them compatible with NLP models and FL algorithms. The performance analysis is shown in Figure 10: Figure 10(a) shows the performance when model training does not run in parallel with encoding/decoding operations, while Figure 10(b) shows the performance when model training overlaps with encoding/decoding operations.
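The additive-masking idea behind these protocols can be demonstrated in a few lines: each client perturbs its update with a random mask, and the masks are constructed to cancel in the aggregate, so the server only learns the sum. This toy version generates all masks centrally and ignores dropouts, which is precisely what the real protocols (pairwise mask agreement in SecAgg, mask encoding in LightSecAgg) are designed to handle.

```python
import random

def masked_updates(updates, seed=0):
    """Toy additive masking: each client adds a random mask; the masks are
    chosen so that they sum to zero, hiding individual updates from the
    server while leaving their sum intact. Demonstration only."""
    rng = random.Random(seed)
    masks = [rng.uniform(-1.0, 1.0) for _ in range(len(updates) - 1)]
    masks.append(-sum(masks))           # masks cancel in the aggregate
    return [u + m for u, m in zip(updates, masks)]

def server_aggregate(masked):
    """The server only ever sees masked values, yet recovers the true sum."""
    return sum(masked)
```

A real deployment would additionally need dropout tolerance and cryptographic mask agreement, which is where the run-time complexity differences among the three protocols arise.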
### D Implementation Details
Non-IID Label Distribution. Note that this partitioning might leave a few clients without enough examples to sample for particular labels if those labels are already used up. Prior works stop assigning early and remove such clients, but this loses the remaining unused examples and makes the number of clients inconsistent. To avoid these issues, we propose a dynamic reassignment method that fills the vacancy of a label with examples of other labels, chosen according to their current ratio of remaining unassigned examples.
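The reassignment idea can be sketched as follows; `fill_quota` and its proportional rounding scheme are our own illustration of the described behavior, not FedNLP's implementation.

```python
def fill_quota(quota, remaining):
    """Fill a client's per-label quota from the remaining example pools.

    When a label's pool is exhausted, the shortfall is reassigned to other
    labels in proportion to their remaining unassigned examples, so the
    client keeps its intended total size. Illustrative sketch only.
    """
    assigned = {}
    shortfall = 0
    for label, want in quota.items():
        take = min(want, remaining.get(label, 0))
        assigned[label] = take
        remaining[label] = remaining.get(label, 0) - take
        shortfall += want - take
    while shortfall > 0:
        pool = {l: r for l, r in remaining.items() if r > 0}
        if not pool:
            break                       # nothing left anywhere to assign
        total = sum(pool.values())
        for label, r in pool.items():
            # Share of the shortfall proportional to what this label has left.
            extra = min(r, max(1, round(shortfall * r / total)), shortfall)
            assigned[label] = assigned.get(label, 0) + extra
            remaining[label] -= extra
            shortfall -= extra
            if shortfall == 0:
                break
    return assigned
```

In the exhausted-label case, the client's total quota is preserved by borrowing from labels that still have unassigned examples.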
### E More Related Work
Federated Learning Methods. Federated Learning (FL) is a broad interdisciplinary research area that mainly focuses on three aspects: statistical challenges, trustworthiness, and system optimization. Numerous methods have been proposed to address the statistical challenges, including FedAvg (McMahan et al. 2017b), FedProx (Li et al. 2020c), FedOPT (Reddi et al. 2020), FedNAS (He, Annavaram, and Avestimehr 2020a; He et al. 2020b), and FedMA (Wang et al. 2020b), which alleviate the non-IID issue with distributed optimization, as well as new formulations such as MOCHA (Smith et al. 2017), pFedMe (Dinh, Tran, and Nguyen 2020), perFedAvg (Fallah, Mokhtari, and Ozdaglar 2020), and Ditto (Li et al. 2020a), which consider personalization and fairness in federated training.
For trustworthiness, security and privacy are the two main research directions that are mainly concerned with resisting data or model attacks, reconstruction, and leakage during training (So, Güler, and Avestimehr 2021b,a, 2020; Prakash et al. 2020; Wang et al. 2020a; Lyu et al. 2020). Given that modern deep neural networks are over-parameterized and dominate nearly all learning tasks, researchers also proposed algorithms or systems to improve the efficiency and scalability of edge training (He, Annavaram, and Avestimehr 2020b; He et al. 2020a, 2019, 2021). We refer readers to the canonical survey (Kairouz et al. 2019) for details.
Although tremendous progress has been made in the past few years, these algorithms or systems have not been fully evaluated on realistic NLP tasks introduced in this paper.
Knowledge Representation on the Web revisited: the Case for Prototypes
Michael Cochez\textsuperscript{1,2,4}, Stefan Decker\textsuperscript{1,2}, and Eric Prud’hommeaux\textsuperscript{3}
\textsuperscript{1} Fraunhofer Institute for Applied Information Technology FIT
DE-53754 Sankt Augustin, Germany
\{stefan.decker,michael.cochez\}@fit.fraunhofer.de
\textsuperscript{2} RWTH Aachen University, Informatik 5
DE-52056 Aachen, Germany
\textsuperscript{3} World Wide Web Consortium (W3C)
Stata Center, MIT
eric@w3.org
\textsuperscript{4} University of Jyvaskyla,
Department of Mathematical Information Technology
FI-40014 University of Jyvaskyla, Finland
Abstract. Recently, RDF and OWL have become the most common knowledge representation languages in use on the Web, propelled by the recommendation of the W3C. In this paper we examine an alternative way to represent knowledge based on Prototypes. This Prototype-based representation has different properties, which we argue to be more suitable for data sharing and reuse on the Web. Prototypes avoid the distinction between classes and instances and provide a means for object-based data sharing and reuse.
In this paper we discuss the requirements and design principles for Knowledge Representation based on Prototypes on the Web, after which we propose a formal syntax and semantics. We further show how to embed knowledge representation based on Prototypes in the current Semantic Web stack and report on an implementation and practical evaluation of the system.
Keywords: Linked Data, Knowledge Representation, Prototypes
1 Introduction and Motivation
In earlier days of Knowledge Representation, Frames [19,20] and Semantic Networks [23] were accepted methods of representing static knowledge. These had no formal semantics but subsequent works (e.g., KL-ONE [2]) introduced reasoning with concepts, roles, and inheritance, culminating in Hayes’s 1979 [10] formalization of Frames. This formalization included instances formalized as elements of a domain (individuals) and classes (or concepts) as sets in a domain (unary predicates). This formalization was subsequently used as a basis for Description Logics (DL) and the investigation of expressiveness vs. tractability [16], which lead to Description Logic systems and reasoners such as SHIQ [11] and FaCT [12]. Finally,
the Semantic Web effort led to the combination of Description Logics with Web technologies such as RDF [6], which subsequently evolved into the Web Ontology Language OWL [13]. However, the formalization of Frames only covered some modeling primitives in use at the time. Specifically, Prototype-based systems, which do not make a distinction between instances and classes, did not get much attention for knowledge representation (cf. Karp [15]). Exceptions exist, for instance THEO [21], a Frame knowledge representation system that deviates from the now-common instance–class distinction by using only one type of frame, its authors arguing that the distinction between instances and classes is not always well defined. Several programming languages based on prototypes were also successfully developed (SELF [27], JavaScript [8], and others), but the notion of Prototypes as a knowledge representation mechanism was not formalized and remained unused in further developments. As noted in [24], these knowledge representation mechanisms may now be relevant for applications again.
In this paper we develop a syntax and formal semantics for a language based on prototypes for the purpose of enabling knowledge representation and knowledge sharing on the Web. We argue that such a system has distinctive advantages compared to other representation languages.
This paper is augmented by a separate technical report in which we detail the software we wrote to support prototype knowledge representation [3]. The report also includes experiments that show how the system performs in a web environment.
2 A Linked Prototype Layer on the Web
2.1 Idea and Vision
Tim Berners-Lee stated the motivation for creating the Web as:
The dream behind the Web is of a common information space in which we communicate by sharing information.
We aim to optimize the sharing and reuse of structured data. Currently, on the Semantic Web, this sharing is typically achieved by either querying a SPARQL endpoint or downloading a graph or an ontology. We call this vertical sharing: top-down sharing where a central authority or institution shares an ontology or graphs. We would like to enable horizontal sharing: sharing between peers, where individual pieces of instance data can be used and reused. Note that this mode of sharing appears much closer to the intended spirit of the Web. Languages like OWL evolved driven by the AI goal of intelligent behavior and sound logical reasoning [14]. They do not emphasize or enable horizontal sharing, i.e., the sharing and reuse of individual objects in a distributed environment; rather, their goal is to represent axioms and enable machines to reason. Imagine a prototype, for example an Oil Painting with properties and values for those properties, that lives at a particular addressable location on the Web. This prototype Oil Painting can be reused in a number of different ways (see Figure 1):
https://www.w3.org/People/Berners-Lee/ShortHistory.html
First, by specializing the Oil Painting prototype (i.e., using it as a template by linking to it), and either specializing or changing its properties. For example, whereas the Oil Painting has the value Canvas for its surface property, the Arnolfini Portrait prototype has the value Oak Panel. However, the value for the creator property (Jan van Eyck) remains the same. To accomplish this, current Semantic Web infrastructure would require one to copy the initial object to a new object before changing its properties. Note, however, that this also means that the newly created object loses its heritage, meaning that it will not receive any updates which are made to objects in the inheritance chain later on.
Second, by either directly or indirectly referring to it as a value of a property. For instance, in Figure 1 the prototype National Gallery has a property displays, which links to the prototype Arnolfini Portrait, which is based on the Oil Painting prototype. This usage of entities is currently also possible using RDF. (But, see also the discussion in section 4.3.)
These two ways of reusing objects on the Web create a distributed network of interlinked objects, requiring horizontal as well as vertical sharing:
- Vertical sharing is enabled by specializing an object or prototype. The prototype that is being specialized defines the vocabulary and structure for the new object, realizing the task of ontologies. For example, a museum can publish a collection of prototypes that describe the types of artifacts on display (e.g., Oil Painting), which can then be used to describe more specific objects.
- Horizontal sharing is enabled by reusing prototypes and only changing specific attributes or linking to other prototypes as attribute values. For example, a specific oil painting by painter Jan van Eyck can be used as a template by describing how other oil paintings differ from it, or a specific oil painting can be the attribute value for the National Gallery prototype. This creates a network of prototypes across the Web.

Fig. 1: The figure shows three prototypes and different relations between them. The Arnolfini Portrait is a specialization of the Oil Painting but is also displayed at the National Gallery, London.
2.2 Requirements
In the previous section, we presented a vision for a prototype layer on the Web. In this section we discuss requirements for the linked prototype layer. Some of these requirements are based on actual tasks that user communities want to perform while others are based on desirable principles of the World Wide Web.
The linked prototype layer must primarily enable sharing and reuse of knowledge. Sharing and reuse of knowledge requires an explicit distributed network of entities. In particular we desire means to share vertically (i.e., provide a central vocabulary or ontology that many can refer to) and share horizontally (i.e., provide concrete reusable entities). Further, it must be possible for the knowledge to evolve over time and anyone should be able to define parts of the network. This implies that central authority should be avoided as much as possible. Preferably, the realization of the prototype layer should be achieved using facilities which the Semantic Web already provides, such as RDF and IRIs, in order to leverage existing data resources. Finally, the designed system should still retain a certain level of familiarity.
2.3 Design Principles
While designing the prototype-based system, we were inspired by design principles, such as the KISS Principle (as defined in [28]), and worse-is-better (as coined by R.P. Gabriel [9]). On the intersection between these principles lies the idea of simplicity. The worse-is-better approach encourages dropping parts of the design that would cause complexity or inconsistency.
Our goal was explicitly not to enable sophisticated reasoning, but rather provide a simple object or prototype layer for the Web.
We use the idea of prototypes as suggested in early Frame systems [15] as well as in current programming languages such as JavaScript [8]. Prototypes fulfill the requirement of reusability and horizontal shareability, since it is possible to refer directly to an existing prototype elsewhere on the Web. Furthermore, a collection of prototypes published by an authority can still serve the function of a central ontology, ensuring vertical shareability.
3 Prototypes
In this section we introduce our approach for knowledge representation on the web, based on prototypes. First, we provide an informal overview of the approach, illustrating the main concepts. Then we introduce a formal syntax and semantics.
3.1 Informal Presentation
To illustrate the prototype system we use an example about two Early Netherlandish painters, the brothers van Eyck. First, we look at a simple representation
of the *Arnolfini Portrait* in fig. 2. This figure contains the prototype of the portrait, which is derived from the empty prototype ($P_\emptyset$, see section 3.2) and has two properties. The first property is $\text{dc:creator}$ and has the value *Jan van Eyck*. The second property describes the format of the artwork. We also display the example using a concrete syntax.
Fig. 2: The prototype representation of the Arnolfini Portrait
Next we will start making use of the prototype nature of the representation. Starting from the Arnolfini Portrait, we derive the *Ghent Altarpiece*. This painting was created by the same painter, but his brother *Hubert van Eyck* was also involved in the creation of the work. Figure 3 illustrates how this inheritance works in practice; we create a prototype for the second work and indicate that its base is the first one (using the big open arrow). Then, we add a property asserting that the other brother is also a creator of the work. The resulting prototype has the properties we defined directly as well as those inherited from its base.
Fig. 3: Deriving the prototype representation of the Ghent Altarpiece from the Arnolfini Portrait
---
6 In the illustrations, we loosely write identifiers like *Arnolfini Portrait* for prototypes, properties and their values. However, the proposed system requires the use of IRIs for identifiers, just like RDF. The concrete syntax examples reflect this. Note that our syntax does not support prefixes as supported by the RDF Turtle syntax. If we write $\text{dc:creator}$ we mean an IRI with scheme dc.
7 For illustrative purposes we use different graphical shapes for the prototypes under consideration and the values of their properties. However, as will become clear in the sections below, all values are themselves prototypes.
Often, there will be a case where the base prototype has properties which are not correct for the derived prototype. In the example shown in fig. 4 we added the `example:location` property to the Arnolfini Portrait with the value **National Gallery, London**. The Ghent Altarpiece is, however, located in the **Saint Bavo Cathedral, Ghent**. Hence, we first remove the `example:location` property from the Arnolfini Portrait before we add the correct location to the second painting. In effect, the resulting prototype inherits the properties of its base, can remove unneeded ones, and add its own properties as needed.
Another way to arrive at the same final state would be to derive from a base without any properties and add all the properties needed. The predefined empty prototype (**proto:P_0**) has no properties. All other prototypes derive from an already existing prototype; circular derivation is not permitted. Now, we will let the prototype which we are creating derive directly from the empty prototype and add properties. This flattening of inherited properties produces the prototype's *fixpoint*. The fixpoint of the prototype created in fig. 4 can be found in fig. 5.
 
**Fig. 4:** Removing properties while deriving the Ghent Altarpiece from the Arnolfini Portrait
 
**Fig. 5:** The result of removing properties while deriving the Ghent Altarpiece from the Arnolfini Portrait.
In the proposed system we apply the closed world and the unique name assumptions. If the system used the open world assumption and one would ask whether the Arnolfini Portrait is located in Beijing, the system would only be able to answer that it does not know. In a closed world setting, the system will answer that the painting is not in Beijing. This conclusion is not based on the fact that the system sees that the painting is located in England, but because of the fact that there is no indication that it would be in Beijing. Under the non-unique name assumption, the system would not be able to answer how many paintings it knows about. Instead, it would only be able to tell that there are one or more. Without the unique name assumption, the resource names Ghent Altarpiece and Arnolfini Portrait may refer to the same real-world instance.
3.2 Formal presentation
The goal of this section is to give a formal presentation of the concepts discussed in the previous section. We separate the formal definition into two parts. First, we define the syntax of our prototype language. Then, we present the semantic interpretation and a couple of definitions which we used informally above.
Prototype Syntax In this section we define the formal syntax of prototype-based knowledge bases. We define a set of syntactic material first, before we define the language.
**Definition 1 (Prototype Expressions).** Let $ID$ be a set of absolute IRIs according to RFC 3987 [7] without the IRI $\text{proto:}P_0$. The IRI $\text{proto:}P_0$ is the empty prototype and will be denoted as $P_\emptyset$. We define expressions as follows:
- Let $p \in ID$ and $r_1, \ldots, r_m \in ID$ with $1 \leq m$. An expression $(p, \{r_1, \ldots, r_m\})$ or $(p, *)$ is called a simple change expression. $p$ is called the simple change expression ID, or its property. The set $\{r_1, \ldots, r_m\}$ or $*$ is called the values of the simple change expression.
- Let $id \in ID$ and $base \in ID \cup \{P_\emptyset\}$ and add and remove be two sets of simple change expressions (called change expressions) such that each simple change expression ID occurs at most once in each of the add and remove sets and $*$ does not occur in the add set. An expression $(id, (base, add, remove))$ is called a prototype expression. $id$ is called the prototype expression ID.
Let $\text{PROTO}$ be the set of all prototype expressions. The tuple $(P_\emptyset, ID, \text{PROTO})$ is called the Prototype Language.
Informally, a prototype expression contains the parts of a prototype which we introduced in the previous subsection. It has an id, a base (a reference to the prototype it derives from), and a description of the properties which are added and removed.
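To make Definition 1 concrete, the following is an illustrative sketch (Python, not part of the paper's formalization; all names are our own) of how prototype expressions could be represented in code:

```python
from dataclasses import dataclass, field

# The predefined empty prototype (Definition 1).
P_EMPTY = "proto:P_0"

@dataclass
class PrototypeExpr:
    """A prototype expression (id, (base, add, remove)).

    Each change set maps a property IRI to its values; using a dict enforces
    the rule that a property occurs at most once per change set.  In the
    remove set, the string "*" stands for the wildcard value.
    """
    id: str
    base: str
    add: dict = field(default_factory=dict)     # prop IRI -> frozenset of IRIs
    remove: dict = field(default_factory=dict)  # prop IRI -> frozenset or "*"

# The Arnolfini Portrait from the running example, derived from the
# empty prototype.
arnolfini = PrototypeExpr(
    id="example:Arnolfini_Portrait",
    base=P_EMPTY,
    add={"dc:creator": frozenset({"example:Jan_Van_Eyck"}),
         "dc:format": frozenset({"example:Painting"})},
)
```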
As an example, we could write down the example of fig. 4 using this syntax. The prototype expression of the Arnolfini Portrait would look like this:
\[
\text{(example:Arnolfini_Portrait,(proto:}P_0\text{,}
\text{\{(dc:creator,\{example:Jan_Van_Eyck\}),}
\text{(dc:format,\{example:Painting\}),}
\text{(example:location,\{example:National_Gallery\})\},}
\text{∅))}
\]
The prototype for the Altarpiece would be written down as follows:
\[
\text{(example:Ghent_Altarpiece,(example:Arnolfini_Portrait,}
\text{\{(dc:creator,\{example:Hubert_Van_Eyck\}),}
\text{(example:location,\{example:Saint_Bavo\})\},}
\text{\{(example:location,*)\}))}
\]
This syntax is trivially transformable into the concrete syntax which we used in fig. 4b and the other examples in the previous subsection.
**Definition 2 (dom).** The domain of a finite subset \( S \subseteq \text{PROTO} \), i.e., \( \text{dom}(S) \) is the set of the prototype expression IDs of all prototype expressions in \( S \).
**Definition 3 (Grounded).** Let \( PL = (P_∅, ID, \text{PROTO}) \) be the Prototype Language. Let \( S \subseteq \text{PROTO} \) be a finite subset of \( \text{PROTO} \). The set \( G \) is defined as:
1. \( P_∅ \in G \)
2. If there is a prototype \( (id,(base,add,remove)) \in S \) and \( base \in G \) then \( id \in G \).
3. \( G \) is the smallest set satisfying (1) and (2).
\( S \) is called grounded iff \( G = \text{dom}(S) \cup \{P_∅\} \). This condition ensures that all prototypes derive (recursively) from \( P_∅ \) and hence ensures that no cycles occur.
To illustrate how cycles are avoided by this definition, imagine that \( S = \{(A,(P_∅,∅,∅)),(B,(C,∅,∅)),(C,(B,∅,∅))\} \). What we see is that there is a cycle between B and C. If we now construct the set \( G \), we get \( G = \{P_∅,A\} \) while \( \text{dom}(S) \cup \{P_∅\} = \{A,B,C,P_∅\} \), and hence the condition for being grounded is not fulfilled.
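Operationally, the groundedness condition can be checked by computing G with a simple fixpoint iteration. The sketch below (Python, illustrative, our own naming) represents a knowledge base only by its id-to-base mapping:

```python
P_EMPTY = "proto:P_0"  # the predefined empty prototype

def is_grounded(bases):
    """bases maps each prototype expression ID to the ID of its base.
    Returns True iff every prototype derives, recursively, from P_EMPTY
    (Definition 3); cyclic derivations make this check fail."""
    grounded = {P_EMPTY}
    changed = True
    while changed:
        changed = False
        for pid, base in bases.items():
            if pid not in grounded and base in grounded:
                grounded.add(pid)
                changed = True
    return grounded == set(bases) | {P_EMPTY}

# The cyclic example from the text: A is fine, but B and C derive
# from each other, so the knowledge base is not grounded.
S = {"A": P_EMPTY, "B": "C", "C": "B"}
```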
**Definition 4 (Prototype Knowledge Base).** Let \( PL = (P_∅, ID, \text{PROTO}) \) be the Prototype Language. Let \( KB \subseteq \text{PROTO} \) be a finite subset of \( \text{PROTO} \). \( KB \) is called a Prototype Knowledge Base iff 1) \( KB \) is grounded, 2) no two prototype expressions in \( KB \) have the same prototype expression ID, and 3) for each prototype expression \( (id,(base,add,remove)) \in KB \), each of the values of the simple change expressions in \( add \) are also in \( \text{dom}(KB) \).
**Definition 5 (R).** Let \( KB \) be a prototype knowledge base and \( id \in ID \). Then, the resolve function \( R \) is defined as: \( R(KB,id) = \) the prototype expression in \( KB \) which has prototype expression ID equal to \( id \).
Prototype Semantics
Definition 6 (Prototype-Structure). Let $SID$ be a set of identifiers. A tuple $pv = (p, \{v_1, \ldots, v_n\})$ with $p, v_i \in SID$ is called a Value-Space for the ID-Space $SID$. A tuple $o = (id, \{pv_1, \ldots, pv_m\})$ with $id \in SID$ and Value-Spaces $pv_i, 1 \leq i \leq m$ for the ID-Space $SID$ is called a Prototype for the ID-Space $SID$. A Prototype-Structure $O = (SID, OB, I)$ for a Prototype Language $PL$ consists of an ID-Space $SID$, a Prototype-Space $OB$ consisting of all Prototypes for the ID-Space $SID$ and an interpretation function $I$, which maps IDs from $PL$ to elements of $SID$.
Definition 7 (Herbrand-Interpretation).
Let $O = (SID, OB, I_h)$ be a Prototype-Structure for the prototype language $PL = (P_\emptyset, ID, PROTO)$. $I_h$ is called a Herbrand-Interpretation if $I_h$ maps every element of $ID$ to exactly one distinct element of $SID$.
As per the usual convention used for Herbrand-Interpretations, we assume that $ID$ and $SID$ are identical.
Next, we define the meaning of the constituents of a prototype. We start with the interpretation functions $I_s$ and $I_c$ which give the semantic meaning of the syntax symbols related to change expressions. These functions (and some of the following ones) are parametrized (one might say contextualized) by the knowledge base. This is needed to link the prototypes together.
Definition 8 ($I_s$). Interpretation for the values of a simple change expression. Let $KB$ be a prototype knowledge base and $v$ the values of a simple change expression. Then, the interpretation for the values of the simple change expression $I_s(KB, v)$ is a subset of $SID$ defined as follows:

$$I_s(KB, v) = \begin{cases} SID, & \text{if } v = * \\ \{I_h(r_1), I_h(r_2), \ldots, I_h(r_n)\}, & \text{if } v = \{r_1, \ldots, r_n\} \end{cases}$$
Definition 9 ($I_c$). Interpretation of a change expression. Let $KB$ be a prototype knowledge base and let $ce = \{(p_1, vs_1), (p_2, vs_2), \ldots\}$ be a change expression with $p_1, p_2, \ldots \in ID$ and the $vs_i$ the values of the simple change expressions. Let $W = ID \setminus \{p_1, p_2, \ldots\}$. Then, the interpretation of the change expression $I_c(KB, ce)$ is a function defined as follows (we will refer to this interpretation as a change set; note that this set defines a function):
$\{(I_h(p_1), I_s(KB, vs_1)), (I_h(p_2), I_s(KB, vs_2)), \ldots\} \cup \bigcup_{w \in W} \{(I_h(w), \emptyset)\}$
Next, we define $J$ which defines what it means for a prototype to have a property.
Definition 10 ($J$). The value for a property of a prototype. Let $KB$ be a prototype knowledge base and $id, p \in ID$. Let $R(KB, id) = (id, (b, a, r))$ (the resolve function applied to $id$), where $a$ and $r$ are the add and remove change expressions. Then the value for the property $p$ of the prototype $id$, i.e., $J(KB, id, p)$, is:

$$J(KB, id, p) = \begin{cases} I_c(KB, a)(I_h(p)), & \text{if } b = P_\emptyset \\ (J(KB, b, p) \setminus I_c(KB, r)(I_h(p))) \cup I_c(KB, a)(I_h(p)), & \text{otherwise} \end{cases}$$
Informally, this function maps a prototype and a property to 1) the set of values defined for this property in the base of the prototype 2) minus what is in the remove set 3) plus what is in the add set.
As an example, let us try to find out what the value for the creator of the Ghent Altarpiece described in the example of the previous subsection would evaluate to assuming that these prototypes were part of a Prototype Knowledge Base $KB$. For brevity we will write example:Ghent_Altarpiece as GA, example:Arnolfini_Portrait as AP, dc:creator as creator, example:Jan_Van_Eyck as JVE, and example:Hubert_Van_Eyck as HVE.
Concretely, we have to evaluate $J(KB, GA, creator) = (J(KB, AP, creator) \setminus I_c(KB, rem)(creator)) \cup I_c(KB, add)(creator)$, where $add$ and $rem$ are the add and remove change sets of the GA prototype expression. First we compute the recursive part, $J(KB, AP, creator) = I_c(KB, add_{ap})(creator) = \{(creator, \{JVE\}), \ldots \}(creator) = \{JVE\}$, where $add_{ap}$ is the add change set of the AP prototype expression. The second part (what is removed) becomes $I_c(KB, rem)(creator) = \emptyset$, since $rem$ contains no simple change expression for $creator$. The final part (what this prototype is adding) becomes $I_c(KB, add)(creator) = \{(creator, \{HVE\}), \ldots \}(creator) = \{HVE\}$. Hence, the original expression becomes $(\{JVE\} \setminus \emptyset) \cup \{HVE\} = \{JVE, HVE\}$, as expected.
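The recursion in Definition 10 can be sketched in code as follows (Python, an illustrative model with our own names; change sets are dictionaries from property to value set, and "*" stands for the wildcard):

```python
P_EMPTY = "proto:P_0"

def j(kb, pid, prop):
    """Value set for property `prop` of prototype `pid` (Definition 10).
    kb maps id -> (base, add, remove), where add/remove map a property
    to a set of values (or, in remove, to the wildcard "*")."""
    base, add, remove = kb[pid]
    added = set(add.get(prop, set()))
    if base == P_EMPTY:
        return added
    removed = remove.get(prop, set())
    # Wildcard removal discards everything inherited for this property.
    inherited = set() if removed == "*" else j(kb, base, prop) - removed
    return inherited | added

# The running example: AP = Arnolfini Portrait, GA = Ghent Altarpiece.
KB = {
    "AP": (P_EMPTY,
           {"creator": {"JVE"}, "format": {"Painting"}, "location": {"NG"}},
           {}),
    "GA": ("AP",
           {"creator": {"HVE"}, "location": {"SB"}},
           {"location": "*"}),  # remove the inherited location entirely
}
```

`j(KB, "GA", "creator")` yields `{"JVE", "HVE"}`, matching the hand computation above, while `j(KB, "GA", "location")` yields only `{"SB"}`.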
**Definition 11 (FP).** The interpretation of a prototype expression is also called its fixpoint. Let $pe = (id, (base, add, remove)) \in KB$ be a prototype expression. Then the interpretation of the prototype expression in context of the prototype knowledge base $KB$ is defined as $FP(KB, pe) = (I_h(id), \{(I_h(p), J(KB, id, p)) | p \in ID, J(KB, id, p) \neq \emptyset\})$, which is a Prototype.
**Definition 12 (I_{KB}:Interpretation of Knowledge Base).** Let $O = (SID, OB, I_h)$ be a Prototype-Structure for the Prototype Language $PL = (P_\emptyset, ID, PROTO)$ with $I_h$ being a Herbrand-Interpretation. Let $KB$ be a Prototype-Knowledge Base. An interpretation $I_{KB}$ for $KB$ is a function that maps elements of $KB$ to elements of $OB$ as follows: $I_{KB}(KB, pe) = FP(KB, pe)$
This concludes the definition of the syntactic structures and semantics of prototypes and prototype knowledge bases. For the semantics, we have adopted Herbrand-Interpretations, which are compatible with the way RDF is handled in SPARQL.
4 Inheritance
Our discussion of inheritance is based on the work by Lieberman [17], Cook et al. [5], de la Rocque Rodriguez [25], and Taivalsaari [26]. The combination of these
works provides a wide overview of different forms of inheritance. Despite the fact that the focus of these works is on object oriented programming (OOP) we chose them because prototype-based systems are much more developed in OOP than in knowledge representation. Many of the OOP concepts and concerns also apply to how inheritance mechanisms can be applied in Knowledge Representation.
Broadly speaking, inheritance means that an entity receives properties from another one because of a relation between the two. Two types of inheritance are common: class-based and prototype-based. In class-based systems there is a distinction between objects and classes. An object is an instantiation of a class or, as some say, a class is a blueprint for an object. A new class can be inherited from another one and will typically inherit all properties and methods from the base or parent class. The values associated with these properties are typically defined in the context of the instances. Prototype-based systems, on the other hand, only have one type of thing: prototypes. A new prototype can be made by cloning an existing prototype (i.e., the base). The freshly created object now inherits from the earlier defined one, and the values are defined directly on the prototypes. As we argued above, we chose prototype-based inheritance to allow for both horizontal and vertical sharing. In the next sections we will describe the consequences of this choice.
4.1 Prototype Dynamics
There are essentially two ways to achieve prototype-based inheritance. The first one, concatenation, copies all the content from the original object to the newly created one and applies the needed changes to the copy. The second one, delegation, keeps a reference to the original object and only stores the changes needed in the newly created object. We decided to follow the second option (for now) because it more closely resembles what one would expect from a system on the Web: instead of centralizing all information in one place, one links to information made available by others. This type of inheritance makes it possible to automatically benefit from enhancements made to the base prototypes. Furthermore, the option of making a copy of the object one extends from is still available; we will discuss this further in section 4.3. Note that this is also a space-time trade-off: copying occupies more space but makes look-ups faster, while delegation stores only the parts which have been changed but makes look-ups slower. Another option is to get parts of both worlds by caching frequently used prototypes for a set amount of time, at the risk of retrieving outdated values. In our technical report [3], we describe a possible approach towards caching using existing HTTP mechanisms.
When parts of a knowledge base are not in the control of the knowledge engineer who is adding new information, it might be tempting to recreate certain prototypes to make sure that the prototypes one is referring to do not change over time, rendering the newly added information invalid.
4.2 A Prototype is-not-a Class
In class-based object oriented languages, deriving a class $A$ from a base class $B$ usually implies that an instance of $A$ can be used wherever an object of type $B$ is expected. In other words, the objects instantiated from the classes $A$ and $B$ follow the Liskov substitution principle [18]. Since class-based object-orientation is currently most common in popular programming languages, one might be tempted to emulate classes in a prototype-based language. Imagine, for instance, that we want to create a prototype `employee` to represent an employee of a company. One might be tempted to give this `employee` a property `name`, with some default value since all employees will have a name in the end. However, this is not necessary, or even desired, when working with prototype-based systems. Instead, the `employee` should only have properties with values which all or most employees have in common, like for example the company they work for. Any more specific properties should instead be put on the employees themselves. Moreover, the fact that a prototype derives from the created `employee` does not have any implication beyond the inherited properties. Put another way, there is no `is-a` relation between a concrete employee and the `employee` prototype from which it was derived. This is also clearly visible from the fact that a derived prototype has the ability to remove properties from the base. Moreover, any other prototype with the properties needed to qualify for being an employee can be seen as an employee; independently from whether it derives from the `employee` prototype or not. Next, we will discuss what it means to be ‘seen’ as an employee.
4.3 Object boundaries
Applications usually need to work with data with predictable properties. For instance, the employees from the example in the previous section need to have a name, gender, birthday, department, and social security number in order for the application to work. Hence, there is a need to specify the properties a prototype needs to have in order to be used for a specific application. This idea is not new and has also been identified in other knowledge representation research. Named Graphs are often used for this purpose, but they don’t capture shared ownership or inheritance. Further, resource shapes\(^8\) and shape expressions [22] have the core idea of determining whether a given RDF graph complies with a specification. The main goal of these is checking some form of constraints, but they could as well be used to identify instances in a dataset.
This need has been identified in many places in OOP literature. An object oriented programming language which allows variables to contain any object which fulfills a given interface definition is said to have a structural type system. Recent examples of programming languages with such type system include OCaml and Go, but to our knowledge the first programming language to use it was Emerald [1] and later School [25]. In these languages, if objects have a given set of operations (according to what they called an abstract type in Emerald, type
\(^8\) [https://www.w3.org/Submission/2014/SUBM-shapes-20140211/]
in School, or interface in Go), they would be treated as an instance of, or assignable to, a variable of that type.
One of the arguments against structural type systems is that it might happen that an object has the properties (or methods) of the type by accident. We can envision this happening in OOP because the names of methods have little semantic meaning connected to them (does the write() method write something to the disk or to the printer?). However, in a Semantic Web setting, the property names are themselves IRIs and chosen carefully not to clash with existing names (a http://xmlns.com/foaf/0.1/workplaceHomepage will always be ‘The workplaceHomepage of a person is a document that is the homepage of an organization that they work for.’). In other words the property names in the system under consideration in this paper do in principle not suffer from this problem.
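The same structural idea can be illustrated with Python's `typing.Protocol`, which we use here only as an illustration (the paper's examples are Emerald, School, and Go): whether an object counts as an employee depends solely on the members it carries, not on what it derives from.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Employee(Protocol):
    """Anything carrying these attributes qualifies, regardless of class."""
    name: str
    department: str

class Painter:  # never declares any relation to Employee
    def __init__(self) -> None:
        self.name = "Jan van Eyck"
        self.department = "Workshop"

# Structural check: a Painter instance satisfies the Employee protocol
# purely because it has the required attributes.
is_employee = isinstance(Painter(), Employee)
```

Note that `runtime_checkable` only tests for the presence of the members, mirroring the concern in the text that an object might qualify "by accident"; with IRI-based property names that risk is mitigated.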
5 Future Work
Since most past work in the research community has been focused on class-based knowledge representation, there are still many areas unexplored related to prototype-based knowledge representation on the web.
5.1 Relation to RDF and OWL
In this paper, we are suggesting a knowledge sharing language based on prototypes. Future work will need to investigate how to layer the prototype language on top of RDF. While most of this conversion and layering should be straightforward (e.g., the IRI of a prototype expression would also be the IRI of the RDF resource), some challenges remain. For example, one would need to define a protocol working on RDF graphs in order to locate and interact with a prototype. However, we believe that these challenges can be overcome.
5.2 A Hint of Class?
In this paper, we presented prototypes as a possible alternative to class-based systems such as OWL for knowledge representation on the Web, at least for the purpose of scalable knowledge sharing. However, both approaches, prototypes and class-based representations, have different use cases and reasons to exist: OWL focuses on enabling reasoning whereas prototypes focus on enabling knowledge sharing. Exploring the exact boundaries of their respective use cases remains a topic for future work.
Another interesting future research path would be the discovery of ‘hidden’ classes in the knowledge base. A hidden class would be formed by a group of objects with similar characteristics. These classes would be automatically discovered, perhaps with techniques like Formal Concept Analysis (FCA) [29], by collecting a large number of prototypes from the Web. Another approach to this would be to perform a hierarchical clustering of the prototypes with a scalable technique as proposed in [4]. After this clustering, it might be possible to extract a class hierarchy from the generated dendrogram.
9 definition of foaf:workplaceHomepage from http://xmlns.com/foaf/spec/
### 5.3 Variations, Evaluations and Large Scale Benchmarks
The prototype system introduced in this paper is only an initial exploration. There are numerous variations possible by making different choices for the inheritance model (e.g., concatenation, multiple inheritance, etc.), the allowed values (intervals, literals, etc.), and solutions for resolving the values for non-local prototypes. These choices will have different implications for implementations, and good evaluation metrics and large-scale benchmarks should be designed to compare them. We presented initial work in this direction in a technical report [3] and publicly available software [https://github.com/miselico/knowledgebase](https://github.com/miselico/knowledgebase) (LGPLv3). We benchmarked the system using several synthetic data sets and observed that the theoretical model presented offers the scalability needed for use in a production environment in a typical distributed web architecture.
### 6 Conclusions
During the last decade, Knowledge Representation (KR) research has been dominated by W3C standards whose development was influenced by the state of mind that researchers in the involved research communities had at the time of creation. Several choices were made which have far-reaching consequences for the way knowledge representation is done on the Web today.
In this paper we tried to take a step back and investigate another option for KR which, in our opinion, has properties more suitable to deliver on the goals of horizontal and vertical sharing. Concretely, we introduced a system in which everything is represented by what we call prototypes and the relations between them. Prototypes enable both vertical sharing through the inheritance mechanism and horizontal sharing by direct reference to any prototype. We provided a possible syntax and semantics for the prototype system and performed experiments with an implementation. The experiments showed that the proposed system easily scales up to millions of prototypes. However, many questions still remain to be answered. First and foremost, this kind of knowledge representation needs to gain traction on the Web, which is a considerable challenge, but one we believe can be met based on early feedback we obtained. Furthermore, a larger deployment of this kind of system would need a clear mechanism for resolving non-local prototypes. We did some experiments in this direction in a technical report using existing web technologies like HTTP, but there are still many options to investigate. We would also like to see what kind of options others come up with to introduce useful parts of class-based systems into the prototype world. Finally, we hinted towards finding ‘hidden’ classes in the prototype system. This would not only be an academic exercise, but would be very useful to be able
to compress knowledge base representations and reduce communication costs. We hope that this paper contributes constructively to the field of Knowledge Representation on the Web and that in the future, more researchers will explore different directions to see how far we can reach.
Acknowledgments
Stefan Decker would like to thank Pat Hayes, Eric Neumann, and Hong-Gee Kim for discussions about Prototypes and Knowledge Representation in general.
Michael Cochez performed parts of this research at the Industrial Ontologies Group of the University of Jyväskylä, Finland, and at the Insight Centre for Data Analytics in Galway, Ireland.
References
Sebastian Bächle and Karsten Schmidt
Databases and Information Systems Group
Department of Computer Science
University of Kaiserslautern
D-67653 Kaiserslautern, Germany
{baechle,kschmidt}@cs.uni-kl.de
Abstract: Buffer memory allocation is one of the most important, but also one of the most difficult tasks of database system administration. Typically, database management systems use several buffers simultaneously for various reasons, e.g., disk speed, page size, access behavior. As a result, available main memory is partitioned among all buffers within the system to suit the expected workload, which is a highly complex optimization problem. Even worse, a carefully adjusted configuration can become inefficient very quickly on workload shifts. Self-tuning techniques automatically address this allocation problem using periodic adjustments of buffer sizes. The tuning itself is usually achieved by changing memory (re-)allocations based on hit/miss ratios, thereby aiming at minimization of I/O costs. All techniques proposed so far observe or simulate the buffer behavior to make forecasts whether or not increased buffer sizes are beneficial. However, database buffers do not scale uniformly (i.e., in a linear fashion) and simple extrapolations of the current performance figures can easily lead to wrong assumptions. In this work, we explore the use of lightweight extensions for known buffer algorithms to improve the forecast quality by identifying the effects of varying buffer sizes using simulation. Furthermore, a simple cost model is presented to optimize dynamic memory assignments based on these forecast results.
1 Introduction
Dynamic database management has gained a lot of attention and visibility during recent years and led to various self-tuning approaches. As I/O reduction is one of the most important aspects, automated buffer memory management has always been one of the building blocks for (self-)tuning of database systems. Not only data placement decisions but also variations in access patterns, page sizes, access speed, read/write characteristics, or prices of storage devices suggest the support of multiple buffers to optimally exploit the existing I/O bandwidth. Memory partitioning, however, frequently entails memory waste, because some buffers may be underused while others are overused. Here, only continuous monitoring of system performance may assure adequate usage of the total memory budget and regular adjustment of buffer allocations at runtime, thereby enabling minimization of waste.
The decision when and which buffers have to be resized requires a cost-based model together with buffer techniques (i.e., page mapping, propagation algorithm) that are self-tunable at runtime. The quality of a decision depends on the cost model itself and the accuracy of forecasts. However, database buffers typically scale non-uniformly (i.e., in a non-linear fashion) and simple extrapolations of current performance figures can easily lead to wrong assumptions. In the worst case, the redistribution of buffer memory results in unintended buffer sweeps followed by excessive I/O thrashing, which again increases the time needed to bring the system back to a stable state. In our opinion, self-tuning components should therefore follow a strict “Don’t be evil” policy.
Most tuning approaches aim at maximum speedup, i.e., they focus on identifying the greatest beneficiary when more buffer memory can be assigned. Accordingly, they usually shift memory from buffers having low I/O traffic and/or low potential for performance gains to more promising ones. We believe that a sole focus on buffer growth is dangerous, because the risk of wrong decisions comes mainly from the inaccuracy of forecasts concerning smaller buffers. Once a buffer is shrunk too much, it may cause a lot of I/O and, in this way, also affect the throughput of all remaining buffers. Thus, reliable estimations for buffer downsizing are obviously as important as estimations for buffer upsizing. Good forecast quality is also urgently needed in dynamic environments which have to cope with many or intense workload shifts. Here, too cautious, i.e., too tiny adjustments are not good enough to keep the system in a well-performing state, even when they are applied incrementally. Reliable forecasts help to justify the more drastic reconfigurations which may be necessary to keep up with workload shifts.
1.1 Forecast of Buffer Behavior
Proposed forecast models for the performance of a resized buffer can be divided into two groups: The first group uses heuristics-based or statistical indicators to forecast buffer hit ratios, whereas the second group is based on simulation. Using heuristics-based approaches, the forecast quality is hard to determine. As a consequence, their use comes with the risk of wrong tuning decisions which may heavily impact system performance. Simulation-based approaches allow trustworthy estimations, but usually only limited to the simulated buffer size. Outside already known or simulated ranges, hit ratios may change abruptly. For this reason, we need forecasts for growing and shrinking buffers.
The performance of a buffer does not scale linearly with its pool size, because mixed workloads containing scans and random I/O can cause abrupt jumps in the hit-ratio trend line as illustrated in Figure 1. These jumps may also lead to differing speed-ups for varying buffer sizes, which again may cause wrong assumptions and decisions.
Performance prediction is always based on information gathered by monitoring, taking samples or (user) hints into account. Hit/miss ratios are the standard quality metrics for buffers, because they are cheap and express the actual goal of buffer use: I/O reduction. Unfortunately, they are useless for performance forecasts, i.e., they do not even allow simple extrapolations for growing or shrinking buffer sizes. To illustrate this fact, let us assume the following scenario for a given buffer size of 5 and LRU-based replacement. At the end of a monitoring period, we observed 5 hits and 10 misses. At least two different access patterns may have led to these statistics:
Scenario 1: 1, 2, 3, 4, 5, 1, 1, 1, 1, 1, 6, 7, 8, 9, 10, ...
Scenario 2: 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, ...
In the first scenario, the 5 hits are attributed to repeated accesses of page 1, whereas, in the second scenario, the hits are attributed to 5 different pages (1, 2, 3, 4, 5). For the same scenarios and a buffer of size 2, we get completely different hit (h) and miss (m) statistics:
Scenario 1: m, m, m, m, m, m, h, h, h, h, m, m, m, m, m, ...
Scenario 2: m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, ...
Obviously, scenario 1 obtains a better hit rate with 4 hits to 11 misses than scenario 2 without any hit. If we instead increase the buffer to hold 6 pages in total, the picture turns again:
Scenario 1: m, m, m, m, m, h, h, h, h, h, m, m, m, m, m, ...
Scenario 2: m, m, m, m, m, h, h, h, h, h, m, h, h, h, h, ...
Now we observe 5 hits to 10 misses for scenario 1, but 9 hits to only 6 misses for scenario 2. This example shows that hit/miss numbers or page/benefit metrics do not allow for correct extrapolations, because the order of page requests and the hit frequency distribution are important. Thus, self-tuning relies on monitoring and sampling of data where current buffer use is taken as an indicator for the future. Information relevant for resizing forecasts such as re-use frequencies, working set size, or noise generated by scans cannot be expressed in single numbers.
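These statistics can be reproduced with a few lines of code. The sketch below simulates a plain LRU buffer for both reference strings; it assumes the repeated-1 run in scenario 1 contains five references, so that the stated 5-hit/10-miss statistics for a buffer of size 5 hold (class and method names are ours, not from the paper).

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal LRU buffer simulation for the two scenarios above.
public class LruSim {
    // Returns the number of hits for the given reference string and buffer size.
    static int hits(int[] refs, int size) {
        Deque<Integer> buf = new ArrayDeque<>(); // head = most recently used
        int hits = 0;
        for (int p : refs) {
            if (buf.removeFirstOccurrence(p)) {
                hits++;                  // hit: page moves to the MRU position
            } else if (buf.size() == size) {
                buf.removeLast();        // miss on a full buffer: evict the LRU page
            }
            buf.addFirst(p);
        }
        return hits;
    }

    public static void main(String[] args) {
        int[] s1 = {1, 2, 3, 4, 5, 1, 1, 1, 1, 1, 6, 7, 8, 9, 10};
        int[] s2 = {1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4};
        System.out.println(hits(s1, 5) + " " + hits(s2, 5)); // both scenarios: 5 hits
        System.out.println(hits(s1, 2) + " " + hits(s2, 2)); // 4 hits vs. no hit
        System.out.println(hits(s1, 6) + " " + hits(s2, 6)); // picture turns again
    }
}
```

Running the simulation for sizes 5, 2, and 6 confirms that identical hit/miss counts at one buffer size say nothing about the counts at another size.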
Instead, the ideal starting point for buffer forecasts is the replacement algorithm used for a buffer. Its statistics incorporate a lot more information about these relevant aspects than any other performance marker. Today, substantial research has already been performed to develop adaptive replacement algorithms, hence, it is safe to assume that such algorithms are “optimally” operating for the available memory. The question is now how to leverage this implicit knowledge for performance forecasts. As we will demonstrate in the remainder of this paper, it is difficult but not impossible to get accurate estimates for buffer downsizing. In combination with already known simulation methods for the estimation of buffer upsizing, we can then build a lightweight framework for dynamic buffer management.
1.2 Related Work
Optimal buffer management has been a key aspect in database system research since the very early days. Thus, various aspects such as the underlying disk model, search strategies within a buffer, replacement algorithms, concurrency issues and the implications of the page layout have been intensely studied [EH84]. Nevertheless, the complexity of buffer management did not allow to distill an optimal configuration for all different kinds of workloads and system environments. Instead, self-tuning mechanisms were explored to resolve performance bottlenecks at runtime.
One early self-tuning approach hints at specific access patterns like scans or index traversals to the buffer to optimize victim selection [JCL90]. This allows to outperform standard LRU-based algorithms but addresses only a single aspect of dynamic buffer management. In [NFS95], the authors give a theoretical base for the combined analysis of buffer sizing decisions and the influence of access patterns. [Dia05] models buffer load balancing as a constrained optimization problem and investigates the application of control theory and optimization theory.
In [SGAL+06], control theory, runtime simulation, and cost-benefit analysis are integrated into a self-tuning framework. The presented forecast technique SBPX also serves as our baseline and is introduced in detail in Section 2. Some heuristic forecast techniques are presented in [BCL93, MLZ+00]. The analytical work in [THTT08] derives an equation relating miss probability to buffer allocation. Finally, [DTB09] proposes a brute-force step-by-step approach to determine the optimal configuration for an entire DBMS.
1.3 Contribution
In this work, we study two major prerequisites for self-tuning buffer memory allocation: cost determination and decision making. As the main objective of buffer tuning is I/O reduction and main memory management, decisions based on I/O costs are required to efficiently distribute available memory among all buffer pools. In particular, we look at overhead and quality for buffer undersizing and oversizing forecasts to estimate I/O costs for alternative configurations.
We present ideas to integrate low-overhead forecast capabilities into several common buffer algorithms and assess their feasibility in experiments. Furthermore, we show how these forecasts can be used for nearly riskless self-tuning decisions. Finally, a short evaluation reveals the prospects of simulation-based buffer tuning as well as its limitations.
The remainder of this paper is organized as follows: Sections 2 and 3 discuss forecast techniques for buffer upsizing and downsizing, respectively. In Section 4, we present a decision model for a self-tuning component. The results of our experiments are shown in Section 5. Finally, Section 6 concludes the paper.
2 Forecast of Buffer Upsizing
The obvious way of accounting I/O costs for alternative buffer sizes is to fully simulate each of them for the same page reference string, i.e., page request sequence. Of course, a simulation of the propagation behavior for page numbers is sufficient; the actual payload data need not be kept in memory. Nevertheless, this approach requires additional data structures, such as hash maps for lookup, lists for the replacement algorithm, and virtual pages. Moreover, each buffer request has to be processed multiple times, i.e., page lookup and replacement maintenance for each simulated configuration. Obviously, the overhead of such a solution is prohibitive. In contrast, cheaper solutions may be less accurate, but still achieve meaningful results for resizing decisions.
Our buffer self-tuning refinements are inspired by the SBPX framework [SGAL+06], which approximates the benefit of a larger buffer through “buffer extension”. This extension is simply an overflow buffer for the page identifiers of the most recently evicted pages. The overflow buffer must, of course, have its own strategy for victimization. The authors of SBPX recommend here a strategy “similar to that of the actual buffer pool” [SGAL+06].
When a page miss in the actual buffer occurs, the extension checks if the page identifier is found in the overflow buffer, i.e., if the page would have been present in a larger buffer. In that case, we can account a “savings” potential for upsizing. Further, we must now maintain the overflow buffer. The page identifier of the actual evicted page is promoted to the overflow buffer, which in general requires to evict another page identifier from the overflow buffer. This replacement is not exactly the same as a real miss in the simulated larger buffer. The identifier of the requested page causing the miss could have been present in the larger buffer. In the course of continuous requests, however, also a larger buffer must evict pages. Thus, a replacement in the overflow buffer can be regarded as a “delayed” replacement effect. In the case of a page hit in the actual buffer, no further bookkeeping is required, because the locality principle suggests that the replacement strategy in a larger buffer holds a superset of the pages present in a smaller one. Listing 1 shows a sketch of the modified page fix routine.
The problem of this approach is that replacement decisions for two separate buffers in combination are not necessarily the same as for a single large buffer. Thus, the forecast quality of upsizing simulations depends on one aspect: When a page is evicted from the actual buffer and promoted to the overflow area, we must be able to transfer “state” information (e.g., hit counters, chain position, etc.) from the actual replacement strategy into the overflow strategy (the copyStateTo calls in Listing 1). Otherwise, the overflow strategy behaves differently.
Listing 1: Modified page fix algorithm for upsize simulation
```java
Frame fix(long pageNo) {
    Frame f = mapping.lookup(pageNo);
    if (f != null) {
        strategy.refer(f);                  // update replacement strategy
        ...                                 // and statistics
    } else {
        Frame of = overflowMapping.lookup(pageNo);
        if (of != null) {
            overflowMapping.remove(of.pageNo);
            ...                             // update overflow hit statistics
        } else {
            of = overflowStrategy.victim();
            overflowBuffer.remove(of.pageNo);
            ...                             // update overflow miss statistics
        }
        Frame v = strategy.chooseVictim();
        strategy.copyStateTo(overflowStrategy);
        v.copyStateTo(of);                  // transfer page identifier to overflow
        overflowMapping.put(of.pageNo, of);
        mapping.remove(v.pageNo);
        ...                                 // replace page in frame v
        strategy.referAsNew(v);             // update replacement strategy
        ...                                 // and statistics
        mapping.put(pageNo, v);
    }
}
```
3 Forecast of Buffer Downsizing
As shown above, knowledge about the performance gain through a larger buffer is useful to determine the greatest beneficiary of more memory among several buffers. However, the question which buffer(s) may be safely shrunk without suffering severe penalties remains unanswered. The authors of SBPX extrapolated downsizing costs as the inverse of the savings potential gained through upsizing [SGAL+06]. For buffer sizes close to the unknown (!) borders of working set sizes, however, this bears the risk of wrong decisions. Therefore, we developed a simple mechanism to find out if page hits would also have been page hits in a smaller buffer. In combination, the SBPX technique allows us to determine which buffer profits the most from additional memory, while our approach helps us to determine which buffer suffers least from downsizing.
The goal of buffer replacement algorithms is the optimized utilization of data access locality, i.e., to keep the set of the currently hottest pages that fits into memory. Accordingly, a small buffer is assumed to keep an “even hotter subset” of the pages that would be present in the actual buffer. Based on this assumption, we denote a subset of the pages in a buffer of size $n$ as $hotset_k$, if it would be kept in a smaller buffer of size $k$. The key idea of our approach is to keep track of this hotset during normal processing. When a page is found in the buffer and belongs to the hotset, it would have been a hit in the smaller buffer, too. However, if a requested page is in the current buffer but not in the hotset, the smaller buffer would need to evict another page, which must be, of course, part of the current hotset, and load the requested page from disk. Here, we only have to maintain the hotset: the page that would have been evicted from the smaller buffer is removed from the hotset and the requested page is added to the hotset. Each swap is accounted as a page miss for the simulated smaller buffer.
Of course, a page miss in the current buffer would also be a page miss in a smaller buffer. Accordingly, we have to select a replacement victim for both the current buffer and the (simulated) smaller buffer. The real victim page is now replaced with the new page and swapped with the virtual victim of the smaller buffer into the hotset. The modified page fix algorithm is shown in Listing 2.
Listing 2: Modified page fix algorithm for downsize simulation
```java
 1: Frame fix(long pageNo) {
 2:   Frame f = mapping.lookup(pageNo);
 3:   if (f != null) {
 4:     if (!f.hotSet) {
 5:       Frame v = strategy.chooseHotSetVictim();
 6:       f.hotSet = true;           // swap frame into hotset
 7:       v.hotSet = false;
 8:       strategy.swapHotset(f, v);
 9:       ...                        // update simulated statistics
10:     }
11:     strategy.refer(f);           // update replacement strategy
12:     ...                          // and statistics
13:   } else {
14:     Frame v = strategy.chooseVictim();
15:     mapping.remove(v.pageNo);
16:     ...                          // replace page in frame v
17:     if (!v.hotSet) {
18:       Frame hv = strategy.chooseHotSetVictim();
19:       hv.hotSet = false;         // swap frame out of hotset
20:       v.hotSet = true;
21:       strategy.swapHotset(v, hv);
22:       ...                        // update simulated statistics
23:     }
24:     strategy.referAsNew(v);      // update replacement strategy
25:     ...                          // and statistics
26:     mapping.put(pageNo, v);
27:   }
28: }
```
Note that a real replacement victim is generally not expected to be part of the current hotset, because this would imply that the replacement strategy evicts a page more recently accessed. In some algorithms, however, such counter-intuitive decisions might be desired, e.g., to explicitly rule out buffer sweeps through large scans. In this case, no extra hotset maintenance is required, because the evicted page leaves the simulated smaller buffer together with the actual one.
Obviously, the overhead of this approach is very small. We only need a single bit per buffer frame to flag hotset membership and must determine a swap partner when a new page enters the hotset. Furthermore, the simulation does not influence the quality of the current buffer, i.e., the strength of the replacement strategy is fully preserved. As said, the choice of the hotset victim depends on the used replacement strategy, to correctly reflect the behavior of the strategy in a smaller buffer. In the following, we will investigate hotset victim determination for four popular families of replacement algorithms. In particular, we want to know if it is possible to predict replacement decisions for a smaller buffer based on the implicit knowledge present.
3.1 LRU
The LRU algorithm embodies a very simple, yet effective replacement strategy. It always evicts the least recently used page from a buffer. Typically, it is implemented as a doubly-linked list as shown in Figure 2.
On request, a page is simply put to the head of the chain. Thus, LRU always finds its replacement candidate at the tail. Accordingly, the $k$ most recently used pages of the LRU chain in a larger buffer of size $n$ are identical to the pages in a simulated smaller buffer of size $k$, and the hotset victim is found at the $k$-th position from the head. The overhead of pointer dereferencing to position $k$ can be avoided with a marker pointer, which is cheap to maintain. Hence, the hotset victim is guaranteed to be identical to the victim in the smaller buffer and the simulation is fully precise. Evidently, the simplicity of LRU even allows us to simulate at the same time the effects of reducing the current buffer to several different smaller sizes, which is especially useful for precise step-wise tuning decisions. It is sufficient to place a marker at each desired position.
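This stack property of LRU, which makes the simulation exact, is easy to verify in code: a page hit in a buffer of size $k$ occurs exactly when the page sits among the first $k$ positions of the larger buffer's LRU chain. The following sketch is our own illustration (not the paper's implementation) and compares the hotset hits counted this way against a real smaller buffer.

```java
import java.util.LinkedList;

// Sketch: LRU downsizing forecast via a position check on the LRU chain.
public class LruHotset {
    // Simulate an LRU buffer of size n while counting the hits that a
    // buffer of size k (the hotset) would observe for the same requests.
    static int[] run(int[] refs, int n, int k) {
        LinkedList<Integer> chain = new LinkedList<>(); // head = MRU
        int hits = 0, hotsetHits = 0;
        for (int p : refs) {
            int pos = chain.indexOf(p);
            if (pos >= 0) {
                hits++;
                if (pos < k) hotsetHits++; // first k chain positions form the hotset
                chain.remove(pos);
            } else if (chain.size() == n) {
                chain.removeLast();        // evict the LRU page on a miss
            }
            chain.addFirst(p);
        }
        return new int[]{hits, hotsetHits};
    }

    // Hits of a real LRU buffer of the given size, for comparison.
    static int lruHits(int[] refs, int size) { return run(refs, size, size)[0]; }

    public static void main(String[] args) {
        int[] refs = {1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4};
        int[] r = run(refs, 6, 5);
        // The simulated smaller buffer is exact for LRU:
        System.out.println(r[1] == lruHits(refs, 5)); // prints true
    }
}
```

The marker pointer described above simply avoids the linear `indexOf` scan used in this sketch.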
3.2 LRU-K
The LRU-K algorithm [OOW99] generalizes the idea of LRU and takes the last $K$ references of a page into account. By doing so, it is “scan-resistant” and less vulnerable to workloads where frequently re-used pages mix with pages having hardly any re-reference. For each page, LRU-K maintains a history vector with the last $K$ references and the timestamp of its last reference. Furthermore, history vectors of already evicted pages are retained for re-use if an evicted page is requested again within the so-called retained information period (RIP). The replacement victim is only searched among those pages that have been buffered for at least a predefined correlated reference period (CIP). The rationale behind this idea is to prevent pages from being dropped immediately after their first reference. For further details on CIP, history maintenance, etc., we refer to the original paper.
The victim page is determined by the maximum backward $K$-distance, i.e., it is the page with the earliest reference in the history vector. Thus, although implemented differently, LRU-K behaves the same as LRU for $K = 1$. The hotset victim is chosen accordingly, as shown in Listing 3. Note that implementations of LRU-K usually maintain a search tree for that. For simplicity, we present here the modification of the unoptimized variant as in the original paper.
Due to the history update algorithm described in [OOW99], more than one victim candidate can exist. This could become a problem for our simulation, because a real buffer might choose a different victim than the simulated one. Therefore, we simply evict the candidate with the least recent reference (the tiebreak on the last access timestamp in Listing 3). As the timestamp of the last access is unique, our simulation will be accurate here. The choice of the RIP, however, turns out to be a problem: if the garbage collection for history entries is not aligned, pages that re-enter the smaller buffer will be initialized differently than in the simulation, which may affect future replacement decisions.
Listing 3: LRU-K hotset victim selection
```java
Frame chooseHotSetVictim() {
    long min = t;                    // t = current time; any history entry is older
    long minLast = Long.MAX_VALUE;
    Frame v = null;
    for (int i = 0; i < pages.length; i++) {
        Frame p = pages[i];
        History h = p.history;
        if (p.hotSet && (t - h.last > CIP)) {
            long last = h.last;
            long dist = h.vector[k - 1]; // backward K-distance
            if ((dist < min) || ((dist == min) && (last < minLast))) {
                v = p;
                min = dist;
                minLast = last;      // tiebreak on the least recent reference
            }
        }
    }
    return v;
}
```
3.3 GCLOCK
The third strategy is GCLOCK [NDD92], which stands for generalized clock algorithm. Like LRU-K, it takes the reference history of a page into account. In contrast to LRU-K, however, it is likely to degrade through scans but can be implemented with less computational and space overhead. The buffer itself is modeled as a circle of buffer frames, i.e., the clock. Each frame also maintains a simple reference counter, which is incremented for each reference to that specific page. For victim selection, the “clock hand” circles over all frames and decrements the reference counters. The clock hand stops at the first frame where the reference counter drops below zero. So, frequently referenced pages remain longer in the buffer, because they have higher reference counts.
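The basic victim search described above can be sketched as follows (a minimal illustration with an array-based clock; field and method names are ours, not from an actual implementation):

```java
// Sketch of plain GCLOCK victim selection: the clock hand sweeps over the
// frames, decrementing reference counters, and stops at the first frame
// whose counter drops below zero.
public class GClock {
    int[] count;  // reference counter per frame
    int hand;     // current clock-hand position

    GClock(int size) { count = new int[size]; }

    // Increment the reference counter on each access to a frame.
    void refer(int frame) { count[frame]++; }

    // Sweep the clock until a counter drops below zero; that frame is the victim.
    int victim() {
        while (true) {
            hand = (hand + 1) % count.length;
            if (--count[hand] < 0) {
                count[hand] = 0;  // reset for the newly loaded page
                return hand;
            }
        }
    }
}
```

The sweep terminates because every pass decrements all counters, so frequently referenced frames survive longer, as described above.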
The determination of a hotset victim is straightforward: We simply have to circle over the frames and look for the first hotset page whose reference counter would first drop below zero. Obviously, this is the page with the minimum reference counter. The algorithm is sketched in Listing 4.
Listing 4: GCLOCK hotset victim selection
```java
Frame chooseHotSetVictim()
{
    Frame v = null;
    int h = clockHand;
    for (int i = 0; i < size; i++) {
        Frame p = circle[++h % size];
        if (p.hotSet) {
            if (p.count == 0) {
                return p;           // its counter would drop below zero first
            } else if ((v == null) || (p.count < v.count)) {
                v = p;
            }
        }
    }
    return v;
}
}
```
Again, this only approximates the behavior of a smaller buffer with GCLOCK. There are two reasons: First, the angular velocity of the clock hand in a smaller buffer is higher because there are fewer frames. Second, the circular arrangement of buffer frames makes the algorithm inherently dependent on the initial order. Thus, victim selection is not only a matter of page utilization, but also a matter of clock-hand position and neighborhood of frames. Using a second clock hand (i.e., pointer) walking solely over the hotset frames is necessary to account for the differing round trips. However, swapping frame positions during hotset maintenance would impact the behavior of GCLOCK in the actual buffer, a circumstance we want to avoid. To improve forecast quality, we implemented the smaller circle, i.e., the hotset, with forward pointers, each hotset page pointing to the logically next one. In case of swapping (see lines 8 and 21 in Listing 2), only the forward pointer and a hotset counter for that page need to be maintained. In Section 5, we will show that these minor efforts can lead to almost perfect estimations.
3.4 2Q
The 2Q algorithm [JS94] is a simplified way of imitating LRU-2, which is noted for delivering good hit ratios but often poor runtime performance due to its complex algorithm. In essence, 2Q is a combination of FIFO and LRU. On the first reference, 2Q places a page in a FIFO queue (denoted \(a_1\)). The first re-reference of a page in the \(a_1\) queue promotes it to the LRU chain (denoted \(a_m\)). The effect of these two “stages” is that only hot pages are promoted to the LRU chain, which on its own tends to keep cold pages longer than necessary. These cold pages, i.e., pages that are accessed only once within a longer time period, are now dropped earlier by the FIFO queue. An extended version of 2Q splits the FIFO queue to keep track of re-references to pages evicted from the FIFO queue [JS94]. The effect is similar to the history caching of LRU-K and comes with queue sizing problems for forecasts, too.
Sizing problems also arise for the FIFO queue and the LRU chain in the standard algorithm. Therefore, we used a simplified variation of 2Q where all buffer frames are assigned to the LRU chain and the FIFO queue only stores references to the pages in the LRU chain. It thus serves as an index into the LRU chain to identify pages referenced only once so far. Victims are primarily selected from the FIFO queue to replace those pages earlier. A subtlety of 2Q here is that the FIFO queue must not be drained, to give new pages a chance for re-reference and promotion to the LRU chain. The minimum fill degree of the FIFO queue is a configurable threshold. For simulation, we must therefore count the number of hotset entries in the queue to be able to decide when a smaller buffer would pick a victim from the FIFO queue and not from the LRU chain. Also, the threshold must be the same for both sizes. Although this results in uniform retention times within the FIFO queue for differing LRU chain sizes, it is acceptable to some degree, because the threshold models the granted window for references of new pages. The hotset victim selection is sketched in Listing 5.
Listing 5: 2Q hotset victim selection
```java
Frame chooseHotSetVictim()
{
Frame v;
if ((a1.numberOfHotsetEntries() > threshold)) {
v = a1.head();
while (!v.hotSet) v = v.a1Next; // Follow FIFO queue to first hotset page
} else {
v = am.head();
while (!v.hotSet) v = v.amNext; // Follow LRU chain to first hotset page
}
return v;
}
```
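The two-stage admission of 2Q described above can be sketched as follows (a minimal illustration of the promotion logic only; eviction and queue sizing are omitted, and all names are ours):

```java
import java.util.ArrayDeque;
import java.util.LinkedHashSet;

// Sketch of basic 2Q admission: a first reference goes to a FIFO queue (a1),
// and a re-reference while still queued promotes the page to the LRU chain (am).
public class TwoQ {
    final ArrayDeque<Integer> a1 = new ArrayDeque<>();       // once-referenced pages
    final LinkedHashSet<Integer> am = new LinkedHashSet<>(); // LRU chain of hot pages

    void refer(int page) {
        if (am.remove(page)) {
            am.add(page);                 // hit in the LRU chain: move to MRU end
        } else if (a1.removeFirstOccurrence(page)) {
            am.add(page);                 // second reference: promote to the LRU chain
        } else {
            a1.addLast(page);             // first reference: enqueue in the FIFO queue
        }
    }

    boolean isHot(int page) { return am.contains(page); }
}
```

A page referenced only once thus never enters the LRU chain and can be dropped cheaply from the FIFO queue, which is exactly the scan resistance the authors describe.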
4 Buffer Tuning
The crucial point in database tuning is the difficulty to precisely predict how a tuning decision will affect system performance. Even experienced database administrators with a deep knowledge of the workload and the database product itself regularly face this challenge. They rely on the assistance of sophisticated monitoring tools to prevent negative effects of their tuning decisions on the production system. Often they also run several observe-analyze-adjust cycles with reference workloads beforehand on dedicated test systems. Of course, this is time-consuming and expensive. Built-in self-monitoring and tuning components can ease this dilemma and reduce the risk of wrong decisions through rather small but continuous and incremental adjustments. In dynamic environments, however, those mechanisms may react too slowly to keep up with the rate of workload shifts or short-term resource allocation for higher-level tuning decisions like auto-indexing. Therefore, we aim towards a re-formulation of the central question of automatic tuning from “Which adjustment certainly will give the greatest performance benefit?” to “Which adjustment most likely will give a performance benefit but certainly not result in a performance penalty?”. In other words, when we know that our reconfigurations will not harm, we get the freedom to try quicker and more aggressive tuning options.
In general, the total amount of buffer memory is limited, so the decision to assign more memory to a certain buffer is directly coupled with the decision of taking this memory from one or several others. Fortunately, the performance optimization heuristic for I/O-saving buffers (e.g., data pages, sorting) is straightforward: the more main memory can be used, the better. Even an oversized buffer, i.e., a buffer larger than the actual data to be buffered, is unlikely to become a performance bottleneck due to bookkeeping overhead; it is just a waste of main memory. The downsizing of a buffer, however, comes along with severe risks: the buffer’s locality may drastically decrease and even turn into thrashing causing excessive I/O, which also influences the throughput of other buffers. Accordingly, we concentrate on the forecast of the negative effects of memory reallocations and base our tuning decisions not only, as common, on the estimated benefits, but also on vindicable forecasts of additional costs.
4.1 Cost Model
Automatic tuning needs to derive costs from the system state or from system behavior to quantify the quality of the current configuration. Additionally, it needs to estimate the costs of alternative configurations to allow for comparison. Ideally, these costs comprise all performance-relevant aspects, including complex dependencies between system components and future workload demands, in a single number to allow for perfect decisions. Clearly, such a perfect cost model does not exist in practice. Instead, costs are typically derived from a mixture of cheaply accounted runtime indicators and heuristics-based or experience-based weight factors. The hope is to reflect at least the correct relationship between alternative setups w.r.t. performance. The more precisely this much weaker requirement can be met, the easier we can identify hazardous tuning decisions before they boomerang on the system.
In contrast to the computational costs of a specific algorithm, costs expressing the quality of a buffer are inherently dependent on the current workload. Buffering 5% of the underlying data, for example, can be an optimal use of main memory at one moment, but become completely useless a few moments later. Therefore, each cost value is a snapshot over a window at a certain point in time, with limited expressiveness for at most a few periods ahead. We define the general goal function for our tuning component: at a given point in time \( t \) with a configuration \( c \), find a configuration \( c' \) that has lower accumulated I/O costs over the next \( n \) periods. The optimal window size and the number of forecast periods again depend on the actual workload; slowly changing workloads enable precise cost estimations for longer periods, while rapidly changing workloads decrease the accuracy of future cost estimates.
For simplicity, our cost model only considers buffer service time, i.e., the time needed to handle a page fix request. Of course, costs assigned to a specific buffer are dominantly determined by the number of I/Os performed. On a buffer miss (denoted \( m \)), a victim page has to be selected for replacement and flushed, if necessary, before the requested page is fetched from disk. Accordingly, a buffer miss causes at least one read operation, but may also cause several writes for flushing the write-ahead log and the victim page. The ratio between reads and synchronous writes is reflected by a weight factor $f_{\text{dirty}}$, which may vary over time and from buffer to buffer.
Depending on the characteristics of the underlying devices or on blocking times under concurrent access, I/O times can also vary between buffers. Hence, the costs of all buffers must be normalized to a common base to become comparable. We therefore use a second weight factor $w_{\text{buffer}}$ for each buffer. As the time needed for a single I/O operation is easy to measure, these factors can be derived and adjusted at runtime with low overhead. Finally, the cost of a buffer at the end of time period $t$ is expressed as:
$$c_{\text{buffer}}(t) = w_{\text{buffer}}(t) \cdot (1 + f_{\text{dirty}}(t)) \cdot m(t)$$
Note, we assume that CPU costs can be safely ignored, either because they are independent of whether an operation can be performed on buffered data or requires additional I/O, or because additional CPU cycles for search routines in larger buffers are negligible compared to an I/O operation. In the remainder of this paper, we also assume that read and write operations have symmetric costs and a low variance. However, it should be evident that the presented basic model can be easily extended to take asymmetric read/write costs (e.g. for solid state drives), different costs for random and sequential I/O, and also the apportionment of preparatory, asynchronous flushes of dirty pages into account.
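To make the formula concrete, the following Python sketch computes and compares the period costs of two hypothetical buffers. All numeric values here are invented for illustration; only the formula itself comes from the cost model above.

```python
def buffer_cost(w_buffer, f_dirty, misses):
    """Period cost of a buffer: c = w_buffer * (1 + f_dirty) * m, where
    w_buffer normalizes differing device I/O times, f_dirty is the
    ratio of synchronous writes to reads, and m is the miss count."""
    return w_buffer * (1.0 + f_dirty) * misses

# Two hypothetical buffers: A sits on a fast device but flushes dirty
# pages (f_dirty > 0); B sits on a slower device but only reads.
cost_a = buffer_cost(w_buffer=1.0, f_dirty=0.2, misses=1000)  # 1200.0
cost_b = buffer_cost(w_buffer=1.5, f_dirty=0.0, misses=600)   #  900.0
```

Because both costs are normalized by the per-buffer weight factors, they can be compared directly when ranking buffers, as the decision model in Section 4.2 does.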
4.2 Decision Model
Our buffer balancing is based on the cost model of Section 4.1. At certain intervals, the buffer configuration is analyzed and optimized if main memory reallocations promise reduced I/O costs for the entire system.
After each monitoring period, the buffer pools are ranked by their cost estimations as follows. The higher a buffer pool is ranked in the save list, the more costs can be saved (i.e., the higher the benefit) by referring to the simulated buffer oversize. On the other hand, buffer pools are also ranked by their cost estimations for undersize figures, where the minimum cost increase is ranked top in the rise list. Using a greedy algorithm, buffer pool pairs are picked from the top of both lists as long as the cost reduction on the save list is higher than the increase on the rise list. Note, a buffer may occur in both lists, which typically indicates a “jump” and is thereby easily recognized. Finally, resize mechanisms are employed to perform the memory “shifts”. The selected buffer from the save list is increased to allow more frames and references to be cached. A buffer chosen from the rise list, however, is shrunk, which may also require flushing victims to achieve the smaller buffer size. Note, an optimal solution is always achievable, but certainly requires more effort. Therefore, we use greedy optimization because it is fast, cheap, and fairly good.
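The greedy pairing can be sketched in a few lines of Python. This is our illustration, not the paper's implementation: the list contents are invented, and the “jump” case where the same buffer tops both lists is simply skipped here rather than handled specially.

```python
def plan_shifts(save_list, rise_list):
    """Greedy buffer balancing (sketch of the decision model).
    save_list: (buffer_id, cost saved by growing), sorted descending.
    rise_list: (buffer_id, cost added by shrinking), sorted ascending.
    Pairs the tops of both lists while the saving exceeds the penalty;
    chosen buffers are removed from both lists to avoid thrashing."""
    shifts = []
    save = list(save_list)
    rise = list(rise_list)
    while save and rise:
        grow_id, saving = save[0]
        shrink_id, penalty = rise[0]
        if grow_id == shrink_id or saving <= penalty:
            break  # simplified: the paper treats the "jump" case specially
        shifts.append((shrink_id, grow_id))  # shift memory: shrink -> grow
        # remove both buffers from both lists to avoid thrashing
        save = [(b, c) for b, c in save if b not in (grow_id, shrink_id)]
        rise = [(b, c) for b, c in rise if b not in (grow_id, shrink_id)]
    return shifts

# Hypothetical per-buffer estimates from over-/undersize simulation:
save = [("scan", 40.0), ("random", 5.0)]   # benefit of growing
rise = [("random", 3.0), ("scan", 50.0)]   # penalty of shrinking
plan_shifts(save, rise)  # -> [('random', 'scan')]
```

Here the scan buffer gains the most from growing (40.0) and the random buffer loses the least from shrinking (3.0), so one shift from `random` to `scan` is planned and the loop terminates.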
Oversize and undersize simulations for several buffer pools do not necessarily have the same size in bytes, which complicates memory shifts. Fine-grained assignments may nevertheless be required; they are made possible by extrapolating the buffer scaling figures between the real size and the simulated sizes.
To avoid thrashing, buffers chosen for resizing are removed from both ranking lists. The simulated undersize and oversize areas have to be adjusted as well, which is similar to the “regular” buffer resize. For instance, the number of hotset pages is reduced by selecting victims out of this subset and by switching their flags. Obviously, oversize areas can be kept or resized as desired.
Although resize decisions are sometimes heavy-weight operations (e.g., flushing pages), they only occur at the end of each monitoring period and are only performed as long as expected benefits justify them.
**Period Refinements for Simulated Buffer Sizes**
Accounting hit/miss numbers for multiple simulated and real buffer sizes over a certain period of time induces estimation errors. For instance, a smaller buffer causing more misses requires more time to process the same amount of buffer requests as the real one. On the other hand, a larger buffer having an improved hit ratio may require less time to process the requests considered during this simulation and tuning period. Therefore, simulation-based cost accounting has to reuse the cost model’s I/O weights for read and write operations to adjust the (simulation) periods. That means, undersize simulation has to limit I/O accounting as soon as the physically possible I/O budget is consumed, and vice versa for SBPX extensions.
**Switchable Propagation Algorithms**
Adjusting memory assignments for buffer pools is also limited by the scalability prospects of a specific buffer algorithm. Different buffer algorithms may perform differently, however, and exchanging the algorithm would be an alternative tuning option that does not actually shift memory. But different algorithms tend to use diverse bookkeeping figures such as access counters, timestamps, or history queues. The major problem is to carry over the current information when switching to a new algorithm. A poor alternative is to reset the entire propagation strategy. A practical way, however, is to initialize the new algorithm by handing over all the “old” pages to it and continuing with the new algorithm. The decision to switch the algorithm can only be based on a full simulation of an alternative propagation algorithm relying on a similar cost model as presented in Section 4.1.
5 Evaluation
Here, we want to evaluate the accuracy of our extensions as well as the decision quality for buffer balancing. But first, we have to describe our benchmark scenarios and their workloads.
As already stated in the introduction, buffers do not scale uniformly; thus, we generated reference strings for various (common) scenarios including random and sequential access of varying sizes.
5.1 Workload
In Figures 3(a)–3(d), we analyze the critical buffer size ranges for various access patterns whose characteristics are summarized in Table 1. Note, the total number of DB pages is equal to the first column’s object size figure of each scenario in this table. The only uniformly scaling buffer is measured for workloads dominated by random I/O (see Figure 3(a)), where the overall hit ratio is – as expected – quite low. In this case, resizing extrapolations will work properly, but such an access behavior is unusual in databases.
Dominating scans mixed with random access are modeled and measured in Figure 3(b). Although scan resistance is addressed by replacement algorithms, scan effects easily provoke “jumps” in the buffer performance. In such cases, the buffer hit rate dramatically increases as soon as frequently occurring scans entirely fit into the buffer. Such “jumps” remain undetected if monitoring happens only at one side of the “jump”. The third workload shown in Figure 3(c) is a mixture of multiple scans and random accesses in a single buffer. This scenario may represent a more typical buffer usage pattern which exhibits a realistic buffer scaling. In Figure 3(c), several areas can be identified having different slopes, where each area boundary may cause uncertainty for extrapolations. In the last sample workload shown in Figure 3(d), a mixture of high-locality scans and some share of random accesses is analyzed. This typical workload scenario causes several (small) “jumps” resulting in a stair-case pattern. In this case, fine-grained extrapolations necessary for buffer tuning may quickly fail, although the average slope is quite similar.

Table 1: Workload characteristics

<table>
<thead>
<tr>
<th>Workload characteristics</th>
<th>Figure 3(a) (random)</th>
<th>Figure 3(b) (scan)</th>
<th>Figure 3(c) (jumps)</th>
<th>Figure 3(d) (real)</th>
</tr>
</thead>
<tbody>
<tr>
<td>request share in %</td>
<td>50</td>
<td>50</td>
<td>25</td>
<td>10</td>
</tr>
<tr>
<td>object size (pages)</td>
<td>150k</td>
<td>22k</td>
<td>7k</td>
<td>250k</td>
</tr>
<tr>
<td>access type</td>
<td>rnd</td>
<td>rnd</td>
<td>seq</td>
<td>rnd, seq</td>
</tr>
</tbody>
</table>

Figure 4: Estimation accuracy for workload random (buffer calls ×100,000 on x-axis)
We want to show in the subsequent sections that our algorithms are capable of identifying and handling all of these (more or less) typical workload scenarios.
5.2 Accuracy
The quality of buffer balancing rests on the estimation quality of our extended buffer algorithms. Therefore, we need to evaluate the estimation accuracy for the differing workloads. For the following experiments, the gray-shaded areas in Figures 3(a)–3(d) specify the simulated ranges centered around the actual buffer sizes indicated by the black lines. For simplicity, we always use a fixed range of ±2% of the total DB size. For each workload, we measure the undersize and oversize estimation accuracy. Each of Figures 4–7 contains the results of five algorithms using the same workload and up to 1.2 million buffer calls. The lines marked with an asterisk (*) illustrate the simulation-based hit ratios and, to enable comparison, the others show those of real buffers of the same sizes.
The first graphs always show the standard LRU behavior, which delivers perfect estimation accuracy; its hit ratio, however, is not the best, although its lightweight simulation is definitely a plus. In contrast, the LRU-K results (second graphs) consistently indicate top hit ratios but show weaknesses in forecast quality. In particular, the downsize simulation of the scan workload fails with a dramatic overestimation.
The results for GCLOCK in Figures 5 and 7 (third graphs) reveal its sensitivity to page order and clock-hand position for hotset simulations. By adding a second clock hand and forward pointers to simulate a separate clock for the hotset pages, we achieve considerably better accuracy (fourth graphs), but its performance remains behind all other strategies.
On the right-hand side, we measure the forecast quality provided by the simplified 2Q algorithm. It delivers top hit ratios in all scenarios while requiring only low maintenance overhead. However, its forecast quality is disappointing in some scenarios. Similar to LRU-K, it fails for workload scan, but in the opposite direction, with underestimation. Further, we observe a suddenly degrading forecast quality for the workloads jumps and real. Even worse, both oversize and undersize estimations are affected, and even the use of a separate policy for the oversize buffer does not lead to better results.
The experiments reveal that our simulations based on the locality principle lead to trustworthy estimations in many cases. On the one hand, simple algorithms like LRU and GCLOCK fit well into our framework. On the other hand, more advanced algorithms such as LRU-K and 2Q also allow lightweight estimations, but suffer from unpredictable estimation errors in some scenarios. The reason lies in their built-in mechanisms for scan resistance, which are hard to model in simulations. Further, these algorithms do not allow logical composition of individual buffers.
In Figure 8, the self-tuning mechanism presented in Section 4.2 automatically tunes two buffers, where buffer 0 was fed with the random workload from Figure 3(a) and buffer 1 with the scans shown in Figure 3(b). Buffer sizes (i.e., simulation and real) are chosen as described in Section 5.2. Due to space limitations, we present the results only for the improved GCLOCK and a fixed memory shift granularity of 2% of the DB size. After the buffers were warmed up (i.e., after 1.2 million buffer calls), the cost model triggered the memory shifts. The random workload buffer was shrunk according to its hotset simulation, whereas buffer 1 was increased. Although the hit ratio of buffer 0 slightly decreases, the overall I/O performance improves, because the hit ratio of buffer 1 increases considerably.
Because the self-tuning decisions are based on a cost model, they are applicable to arbitrary scenarios. In our second example, we again use two buffers, one fed from the jumps workload generator and the other from the real workload generator, as shown in Figure 3(c) and Figure 3(d). In this setting, SBPX fails because it does not recognize that the size of buffer 0 is close to a “jump” boundary. However, as indicated by Figure 9, our downsizing simulation detects the pitfall and prevents buffer performance penalties.
5.3 Buffer Balance
Resizing two buffers is obviously simple. Therefore, we combine both experiments in a single setup shown in Figure 10. The cut-out shows two memory shifts leading to minor drops in the hit ratio on one side but clear improvements on the other, resulting in steadily improved buffer performance.
In summary, our experiments show that buffer balancing can be achieved at low cost, but that it heavily depends on accurate and lightweight forecasts in both directions – upsize and downsize.
6 Conclusions
Even after decades of research on buffer management and optimization, the problem of a reliable, dynamic adaptation of buffer memory allocation is not fully solved. In this work, we studied opportunities to forecast buffer resizing effects to support harm-free self-tuning decisions. As downsizing a buffer is accompanied by severe risks of thrashing, we argued that reliable prediction of downsizing effects is a key point for self-tuning decisions. Furthermore, we argued that these forecasts must not add noticeable overhead to normal processing. Therefore, we focused on lightweight techniques that exploit knowledge from the buffer replacement strategies for forecasts, and presented possible solutions for four families of replacement algorithms.
In our experiments, we could show that forecast quality is heavily dependent on the actual strategy. It seems that sophisticated strategies like LRU-K and 2Q make it hard or even impossible to get reliable forecasts for upsizing, downsizing, or both. We found two reasons for this: First, such algorithms use history-recording techniques, which are very costly to emulate for varying sizes. Second, they are extremely sensitive to configuration parameters, which cannot easily be carried over between differing buffer sizes. However, simpler, yet widely-used strategies like LRU and GCLOCK turned out to allow for cheap and highly accurate or even perfect forecasts. In conjunction with a
simple cost model and a greedy algorithm, we demonstrated the use of forecasts to improve buffer hit ratios without the risk of severe performance penalties. Following the idea of differing “stages” in 2Q to improve buffer behavior, our findings suggest further partitioning of buffers with complex replacement strategies into several distinct buffers with simpler but more predictable strategies. This way, forecasts would generally become reliable, and fragmentation issues would be automatically resolved by the self-tuning capabilities.
Abstract
Fran (Functional Reactive Animation) is a collection of data types and functions for composing richly interactive, multimedia animations. The key ideas in Fran are its notions of behaviors and events. Behaviors are time-varying, reactive values, while events are sets of arbitrarily complex conditions, carrying possibly rich information. Most traditional values can be treated as behaviors, and when images are thus treated, they become animations. Although these notions are captured as data types rather than a programming language, we provide them with a denotational semantics, including a proper treatment of real time, to guide reasoning and implementation. A method to effectively and efficiently perform event detection using interval analysis is also described, which relies on the partial information structure on the domain of event times. Fran has been implemented in Hugs, yielding surprisingly good performance for an interpreter-based system. Several examples are given, including the ability to describe physical phenomena involving gravity, springs, velocity, acceleration, etc. using ordinary differential equations.
1 Introduction
The construction of richly interactive multimedia animations (involving audio, pictures, video, 2D and 3D graphics) has long been a complex and tedious job. Much of the difficulty, we believe, stems from the lack of sufficiently high-level abstractions, and in particular from the failure to clearly distinguish between modeling and presentation, or in other words, between what an animation is and how it should be presented. Consequently, the resulting programs must explicitly manage common implementation chores that have nothing to do with the content of an animation, but rather its presentation through low-level display libraries running on a sequential digital computer. These implementation chores include:
- stepping forward discretely in time for simulation and frame generation, even though animation is conceptually continuous;
- capturing and handling sequences of motion input events, even though motion input is conceptually continuous;
- time slicing to update each time-varying animation parameter, even though these parameters conceptually vary in parallel; and
By allowing programmers to express the “what” of an interactive animation, one can hope to then automate the “how” of its presentation. With this point of view, it should not be surprising that a set of richly expressive recursive data types, combined with a declarative programming language, serves comfortably for modeling animations, in contrast with the common practice of using imperative languages to program in the conventional hybrid modeling/presentation style. Moreover, we have found that non-strict semantics, higher-order functions, strong polymorphic typing, and systematic overloading are valuable language properties for supporting modeled animations. For these reasons, Fran provides these data types in the programming language Haskell [8].
Advantages of Modeling over Presentation
The benefits of a modeling approach to animation are similar to those in favor of a functional (or other declarative) programming paradigm, and include clarity, ease of construction, composability, and clean semantics. But in addition there are application-specific advantages that are in some ways more compelling, painting the picture from a software engineering and end-user perspective. These advantages include the following:
- Authoring. Content creation systems naturally construct models, because the end users of such systems think in terms of models and typically have neither the expertise nor interest in programming presentation details.
- Optimizability. Model-based systems contain a presentation sub-system able to render any model that can be constructed within the system. Because higher-level information is available to the presentation sub-system than with presentation programs, there are many more opportunities for optimization.
- Regulation. The presentation sub-system can also more easily determine level-of-detail management, as well as sampling rates required for interactive animations, based on scene complexity, machine speed and load, etc.
- Mobility and safety. The platform independence of the modeling approach facilitates the construction of mobile applications that are provably safe in World Wide Web applications.
The Essence of Modeling

Our goal in this paper is to convey the essence of a modeling approach to reactive animations as captured in Fran, as summarized in the following four concepts:
1. Temporal modeling. Values, called behaviors, that vary over continuous time are the chief values of interest. Behaviors are first-class values, and are built up compositionally; concurrency (parallel composition) is expressed naturally and implicitly. As an example, the following expression evaluates to an animation (i.e., an image behavior) containing a circle over a square. At time \( t \), the circle has size \( \sin t \), and the square has size \( \cos t \).
\[
\texttt{bigger (sin t) circle `over` bigger (cos t) square}
\]
2. Event modeling. Like behaviors, events are first-class values. Events may refer to happenings in the real world (e.g., mouse button presses), but also to predicates based on animation parameters (e.g., proximity or collision). Moreover, such events may be combined with others, to an arbitrary degree of complexity, thus factoring complex animation logic into semantically rich, modular building blocks. For example, the event describing the first left-button press after time \( t_0 \) is simply \( \mathtt{lbp\ t_0} \); one describing time squared being equal to 5 is just:
\[
\mathtt{predicate}\ (t^2 = 5)\ \mathtt{t_0}
\]
and their logical disjunction is just:
\[
\mathtt{lbp\ t_0}\ \mathtt{.|.}\ \mathtt{predicate}\ (t^2 = 5)\ \mathtt{t_0}
\]
3. Declarative reactivity. Many behaviors are naturally expressed in terms of reactions to events. But even these "reactive behaviors" have declarative semantics in terms of temporal composition, rather than an imperative semantics in terms of the state changes often employed in event-based formalisms. For example, a color-valued behavior that changes cyclically from red to green with each button press can be described by the following simple recurrence:
\[
\begin{array}{l}
\texttt{colorCycle t0 =} \\
\texttt{\ \ red\ \ `untilB` lbp t0 *=> \textbackslash t1 ->} \\
\texttt{\ \ green `untilB` lbp t1 *=> \textbackslash t2 ->} \\
\texttt{\ \ colorCycle t2}
\end{array}
\]
(In Haskell, identifiers are made into infix operators by enclosing them in backquotes, as in \( \texttt{x `untilB` e} \). Conversely, infix operators are made into identifiers by enclosing them in parentheses, as in \( \texttt{(+) x y} \). Lambda abstractions are written as \( \texttt{\textbackslash vars -> expr} \).)
4. Polymorphic media. The variety of time-varying media (images, video, sound, 3D geometry) and parameters of these types (spatial transformations, colors, points, vectors, numbers) have their own type-specific operations (e.g., image rotation, sound mixing, and numerical addition), but fit into a common framework of behaviors and reactivity. For instance, the \( \texttt{untilB} \) operation used above is polymorphic, applying to all types of time-varying values.
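Concepts 1 and 3 above can be made concrete outside Haskell with a small executable model. The following Python sketch is purely illustrative (the encoding and all names are ours, not Fran's API): a behavior is a function from continuous time to a value, and `untilB` switches behaviors at an event's occurrence time.

```python
import math

# A behavior is modeled as a function from continuous time to a value.
def const(v):
    return lambda t: v

# Toy image combinators named after the paper's first example: `bigger`
# scales an image by a time-varying factor, `over` stacks two images.
def bigger(scale_b, image_b):
    return lambda t: ("scaled", scale_b(t), image_b(t))

def over(top_b, bottom_b):
    return lambda t: ("over", top_b(t), bottom_b(t))

anim = over(bigger(math.sin, const("circle")),
            bigger(math.cos, const("square")))

# Model of `untilB`: behave like b until the event occurs, then like the
# behavior produced by the event's handler, which receives the
# occurrence time. An event is modeled as a (time, handler) pair.
def until_b(b, event):
    t_e, handler = event
    return lambda t: b(t) if t < t_e else handler(t_e)(t)

red, green = const("red"), const("green")
press = (3.0, lambda t1: green)   # a button press at t = 3
color = until_b(red, press)
# color(1.0) == "red", color(5.0) == "green"
```

Sampling `anim` at any time yields one frame; concurrency is implicit because all sub-behaviors are evaluated at the same time point, mirroring the parallel composition described above.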
Our Contributions

We have captured the four features above as a collection of recursive data types, functions, and primitive graphics routines in a system that we call Fran, for Functional Reactive Animation. Although these data types and functions do not form a programming language in the usual sense, we provide them with a formal denotational semantics, including a proper treatment of real time, to allow precise, implementation-independent reasoning. This semantics includes a CPO of real time, whose approximations allow us to reason about events before they occur. As would be true of a new programming language, the denotational semantics has been extremely useful in designing Fran. All of our design decisions begin with an understanding of the formal semantics, followed by reflecting the semantics in the implementation. (The semantics is given in Section 2.)
Perhaps the most novel aspect of Fran is its implicit treatment of time. This provides a great deal of expressiveness to the multimedia programmer, but also presents interesting challenges with respect to both formal semantics and implementation. In particular, events may be specified in terms of boolean functions of continuous time. These functions may become true for arbitrarily brief periods of time, even instantaneously, and so it is challenging for an implementation to detect these events. We solve this problem with a robust and efficient method for event detection based on interval analysis. (Implementation issues are discussed in Section 4.)
Specifically, the nature of an event can be exploited to eliminate search over intervals of time in which the event provably does not occur, and focus instead on time intervals in which the event may occur. In some cases, such as a collection of bouncing balls, exact event times may be determined analytically. In general and quite frequently, however, analytic techniques fail to apply. We describe instead an algorithm for event detection based on interval analysis and relate it to the partial information structure on the CPO of event times.
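The pruning idea can be sketched in a few lines of Python. This is our illustration of interval-based event detection, not Fran's implementation; the bisection strategy, names, and tolerance are ours.

```python
def detect(f_bounds, lo, hi, eps=1e-9):
    """Interval-analysis event detection (sketch). f_bounds(lo, hi)
    returns conservative (min, max) bounds of a condition f over the
    time interval [lo, hi]. Intervals whose bounds exclude zero
    provably contain no occurrence of the event f(t) == 0 and are
    pruned; the rest are bisected, earliest interval first."""
    fmin, fmax = f_bounds(lo, hi)
    if fmin > 0 or fmax < 0:
        return None                      # event provably does not occur here
    if hi - lo < eps:
        return lo                        # earliest possible event time
    mid = (lo + hi) / 2.0
    left = detect(f_bounds, lo, mid, eps)
    return left if left is not None else detect(f_bounds, mid, hi, eps)

# Example: the event "t^2 = 5" on [0, 10]. Since t^2 - 5 is monotone
# for t >= 0, exact bounds over an interval are easy to state:
bounds = lambda lo, hi: (lo * lo - 5, hi * hi - 5)
t_event = detect(bounds, 0.0, 10.0)      # converges to sqrt(5) ~ 2.236
```

The search visits only intervals that may contain the event, so subintervals such as [0, 1.25], where the bounds exclude zero, are discarded without further sampling.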
2 The Formal Semantics of Fran
The two most fundamental notions in Fran are behaviors and events. We treat them as a pair of mutually recursive polymorphic data types, and specify operations on them via a denotational semantics. (The "media types" we often use with events and behaviors will be treated formally in a later paper; but see also [7].)
2.1 Semantic Domains
The abstract domain of time is called \textit{Time}. The abstract domains of polymorphic behaviors (\( \alpha \text{-behaviors} \)) and polymorphic events (\( \alpha \text{-events} \)) are denoted \( \textit{Behavior}_\alpha \) and \( \textit{Event}_\alpha \), respectively.
Most of our domains (integers, booleans, etc.) are standard, and require no explanation. The \emph{Time} domain, however, requires special treatment, since we wish values of time to include partial elements. In particular, we would like to know that a time is "at least" some value, even if we don’t yet know exactly what the final value will be. To make this notion precise, we define a domain (pointed CPO) of time as follows:
Denote the set of real numbers as $\mathbb{R}$, and include in that set the elements $\infty$ and $-\infty$. This set comes equipped with the standard arithmetic ordering $\leq$, including the fact that $-\infty \leq x \leq \infty$ for all $x \in \mathbb{R}$.
Now define \emph{Time} $= \mathbb{R} \cup \mathbb{R}$, where elements in the second "copy" of $\mathbb{R}$ are distinguished by prefixing them with $\geq$, as in $\geq42$, which should be read: "at least 42." Then define $\bot_{\textit{Time}} = \ \geq(-\infty)$, and the domain (i.e. information) ordering on \emph{Time} by:
\[
\begin{align*}
x \sqsubseteq x, & \quad \forall x \in \mathbb{R} \\
\geq x \sqsubseteq y & \quad \text{if } x \leq y \quad \forall x, y \in \mathbb{R} \\
\geq x \sqsubseteq \geq y & \quad \text{if } x \leq y \quad \forall x, y \in \mathbb{R}
\end{align*}
\]
It is easy to see that $\bot_{\textit{Time}}$ is indeed the bottom element. Also note that a limit point $y$ is just the LUB of the set of partial elements ("pre-times") that approximate it:
\[
y = \bigsqcup \{ \geq x \mid x \leq y \}
\]
Since the ordering on the domain \emph{Time} is chain-like, and every such chain has a LUB (recall that $\mathbb{R}$ has a top element $\infty$), the domain \emph{Time} is a pointed CPO. This fact is necessary to ensure that recursive definitions are well defined.
Elements of \emph{Time} are most useful for approximating the time at which an event occurs. That is, an event whose time is approximately $\geq t$ is one whose actual time of occurrence is greater than $t$. Note that the time of an event that never occurs is just $\infty$, the LUB of $\mathbb{R}$.
Finally, we extend the definition of arithmetic $\leq$ to all of \emph{Time} by defining its behavior across the subdomains as follows:
\[
x \leq \geq y \quad \text{if } x \leq y
\]
This can be read: “The time $x$ is less than or equal to a time that is at least $y$, if $x \leq y$.” ($\geq x \leq y$ and $\geq x \leq \geq y$ are undefined.) We can easily show that this extended definition of type $\text{Time} \rightarrow \text{Time} \rightarrow \text{Bool}$ is continuous with respect to $\sqsubseteq$. It is used in various places in the semantics that follows.
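As an illustration (ours, not the paper's), the two orderings can be modeled with an explicit datatype, using Double for the reals and Maybe Bool for the partially defined extended ordering:

```haskell
-- A sketch (not from the paper) of the Time domain: a time is either an
-- exact value or a lower bound ("at least t").
data Time = Exactly Double | AtLeast Double
  deriving (Eq, Show)

-- Information ordering: exact times approximate only themselves;
-- a lower bound approximates anything at or above it.
infoLeq :: Time -> Time -> Bool
infoLeq (Exactly x) (Exactly y) = x == y   -- x ⊑ x
infoLeq (AtLeast x) (Exactly y) = x <= y   -- ≥x ⊑ y  if x ≤ y
infoLeq (AtLeast x) (AtLeast y) = x <= y   -- ≥x ⊑ ≥y if x ≤ y
infoLeq (Exactly _) (AtLeast _) = False

-- Extended arithmetic ordering: defined only where the paper defines it;
-- Nothing models the undefined (not-yet-known) cases.
timeLeq :: Time -> Time -> Maybe Bool
timeLeq (Exactly x) (Exactly y) = Just (x <= y)
timeLeq (Exactly x) (AtLeast y)
  | x <= y    = Just True                  -- x ≤ ≥y if x ≤ y
  | otherwise = Nothing
timeLeq (AtLeast _) _ = Nothing
```

The `Nothing` results correspond to comparisons whose answer cannot be determined from a lower bound alone.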
Semantic Functions
We define an interpretation of $\alpha$-behaviors as a function from time to $\alpha$-values, producing the value of a behavior $b$ at a time $t$:
\[
\text{at} : \text{Behavior}_\alpha \rightarrow \text{Time} \rightarrow \alpha
\]
Next, we define an interpretation on $\alpha$-events as simply non-strict $\text{Time} \times \alpha$ pairs, describing the time and information associated with an occurrence of the event.
\[
\text{occ} : \text{Event}_\alpha \rightarrow \text{Time} \times \alpha
\]
Now that we know the semantic domains we are working with, we present the various behavior and event combinators with their formal interpretations.
2.2 Semantics of Behaviors
Behaviors are built up from other behaviors, static (non-time-varying) values, and events, via a collection of constructors (combinators).
\textbf{Time.} The simplest primitive behavior is \emph{time}, whose semantics is given by:
\[
\text{time} : \text{Behavior}_{\text{Time}} \\
\text{at}[\text{time}] t = t
\]
Thus $\text{at}[\text{time}]$ is just the identity function on \emph{Time}.
\textbf{Lifting.} We would like to have a general way of "lifting" functions defined on static values to analogous functions defined on behaviors. This lifting is accomplished by a (conceptually infinite) family of operators, one for each arity of functions.
\[
\text{lift}_n : (\alpha_1 \rightarrow \ldots \rightarrow \alpha_n \rightarrow \beta) \rightarrow \text{Behavior}_{\alpha_1} \rightarrow \ldots \rightarrow \text{Behavior}_{\alpha_n} \rightarrow \text{Behavior}_{\beta} \\
\text{at}[\text{lift}_n\ f\ b_1 \ldots b_n]\ t = f\ (\text{at}[b_1]\ t) \ldots (\text{at}[b_n]\ t)
\]
Note that constant value lifting is just $\text{lift}_0$.
\textbf{Notational aside:} In practice, lifting is needed quite frequently, so it would be inconvenient to make it explicit everywhere. It is more desirable to use familiar names like "$\sin$", "$\cos$", "$+$", "$\times$", and even literals like "$3$" and "blue", to refer to lifted versions of their standard interpretations. For instance, a literal such as $42$ should behave as the constant behavior "$\text{lift}_0\ 42$", and a sum of behaviors such as "$b_1 + b_2$" should behave as "$\text{lift}_2\ (+)\ b_1\ b_2$", where "$(+)$" is curried addition. In our implementation of Fran in Haskell, type classes help considerably here, since the \textbf{Num} class provides a convenient implicit mechanism for lifting numerical values. In particular, with a suitable instance declaration, we achieve exactly the interpretations above, even for literal constants.
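A minimal model (ours, not the real Fran implementation) makes the lifting family and the Num trick concrete, representing behaviors simply as functions of a Double-valued time:

```haskell
-- Toy model of behaviors as sampling functions.
newtype Behavior a = Behavior (Double -> a)

at :: Behavior a -> Double -> a
at (Behavior f) = f

time :: Behavior Double
time = Behavior id

-- The lifting family for arities 0, 1, and 2.
lift0 :: a -> Behavior a
lift0 x = Behavior (const x)

lift1 :: (a -> b) -> Behavior a -> Behavior b
lift1 f b = Behavior (\t -> f (at b t))

lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f b1 b2 = Behavior (\t -> f (at b1 t) (at b2 t))

-- Implicit lifting via Num: numeric literals and (+), (*), (-)
-- now work directly on behaviors.
instance Num a => Num (Behavior a) where
  (+)         = lift2 (+)
  (*)         = lift2 (*)
  (-)         = lift2 (-)
  abs         = lift1 abs
  signum      = lift1 signum
  fromInteger = lift0 . fromInteger
```

With this instance, `time + 1` denotes `lift2 (+) time (lift0 1)`, exactly as the aside describes.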
\textbf{Time transformation.} A \textit{time transform} allows the user to transform local time-frames. It thus supports what we call \textit{temporal modularity} for behaviors of all types. (Similarly, 2D and 3D transforms support \textit{spatial modularity} in image and geometry behaviors.)
\[
\text{timeTransform} : \text{Behavior}_\alpha \rightarrow \text{Behavior}_{\text{Time}} \rightarrow \text{Behavior}_\alpha \\
\text{at}[\text{timeTransform}\ b\ tb] = \text{at}[b] \circ \text{at}[tb]
\]
Thus note that \textit{time} is an identity for \textit{timeTransform}:
\[
\text{timeTransform } b \text{ time} = b
\]
As examples of the use of time transformation in Fran, the expression:
\[
\text{timeTransform } b \text{ (time/2)}
\]
slows down the animation $b$ by a factor of 2, whereas:
\[
\text{timeTransform } b \text{ (time - 2)}
\]
delays it by 2 seconds.
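In the functions-of-time model, timeTransform is just composition of sampling functions; the following sketch (ours, not the paper's code) reproduces the two examples:

```haskell
-- Toy model: behaviors as sampling functions of Double-valued time.
newtype Behavior a = Behavior { at :: Double -> a }

time :: Behavior Double
time = Behavior id

-- timeTransform composes the sampling functions.
timeTransform :: Behavior a -> Behavior Double -> Behavior a
timeTransform b tt = Behavior (at b . at tt)

-- "timeTransform b (time/2)": half speed.
halfSpeed :: Behavior a -> Behavior a
halfSpeed b = timeTransform b (Behavior (/ 2))

-- "timeTransform b (time - 2)": a 2-second delay.
delayBy2 :: Behavior a -> Behavior a
delayBy2 b = timeTransform b (Behavior (subtract 2))
```

Sampling `halfSpeed b` at time 4 samples `b` at time 2, so the animation unfolds at half the rate.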
\textbf{Integration.} Integration applies to real-valued as well as 2D and 3D vector-valued behaviors, or more generally, to vector spaces (with limits). Borrowing Haskell's type class notation to classify vector-space types:
\[ \text{integral} : \text{VectorSpace}\ \alpha \Rightarrow \text{Behavior}_\alpha \rightarrow \text{Time} \rightarrow \text{Behavior}_\alpha \]
\[ \text{at}[\text{integral}\ b\ t_0]\ t = \int_{t_0}^{t} \text{at}[b] \]
Integration allows behaviors to be specified in terms of rates of change (velocities and accelerations), as the examples in Section 3 illustrate.
\textbf{Reactivity.} A behavior may be specified to act as one behavior until an event occurs, after which it acts as the behavior yielded by the event:
\[ \text{untilB} : \text{Behavior}_\alpha \rightarrow \text{Event}_{\text{Behavior}_\alpha} \rightarrow \text{Behavior}_\alpha \]
\[ \text{at}[b \text{ untilB } e]\ t = \text{if } t \leq t_e \text{ then } \text{at}[b]\ t \text{ else } \text{at}[b']\ t, \quad \text{where } (t_e, b') = \text{occ}[e] \]
Note that the inequality used here, $t \leq t_e$, is the one defined in Section 2.1. In the next section, examples of reactivity are given for each of the various kinds of events.
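The untilB combinator (a behavior acts as $b$ until an event occurs, then as the behavior the event yields) can be modeled directly. The following sketch is ours, not Fran's implementation (Section 4 describes that); it assumes Double for Time and models an event as a (time, value) pair:

```haskell
-- Toy model: behaviors as sampling functions, events as
-- (occurrence time, value) pairs.
newtype Behavior a = Behavior { at :: Double -> a }
type Event a = (Double, a)

-- b `untilB` e acts as b up to (and including) the occurrence time,
-- and as the event's behavior afterwards.
untilB :: Behavior a -> Event (Behavior a) -> Behavior a
untilB b (te, b') = Behavior (\t -> if t <= te then at b t else at b' t)

-- Example: zero until time 5, then one.
step :: Behavior Double
step = Behavior (const 0) `untilB` (5, Behavior (const 1))
```

Note that the model's `if t <= te` mirrors the semantic equation's use of the extended ordering, collapsed here to ordinary comparison on known times.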
2.3 Semantics of Events
\textbf{Event handling.} In order to give examples using specific kinds of events, we first describe the notion of event handlers, which are applied to the time and data associated with an event using the following operator:
\[ (\text{+=>}) : \text{Event}_\alpha \rightarrow (\text{Time} \rightarrow \alpha \rightarrow \beta) \rightarrow \text{Event}_\beta \]
\[ \text{occ}[e \text{ +=> } f] = (t, f\ t\ x), \quad \text{where } (t, x) = \text{occ}[e] \]
For convenience, we will also make use of the following derived operations, which ignore the time or the data or both:
\[ (\text{==>}) : \text{Event}_\alpha \rightarrow (\alpha \rightarrow \beta) \rightarrow \text{Event}_\beta \]
\[ (\text{*=>}) : \text{Event}_\alpha \rightarrow (\text{Time} \rightarrow \beta) \rightarrow \text{Event}_\beta \]
\[ (\text{-=>}) : \text{Event}_\alpha \rightarrow \beta \rightarrow \text{Event}_\beta \]
These different operator symbols are somewhat mnemonic: "+=>" receives all of the parameters, "-=>" receives none of them, "*=>" receives only the time, and "==>" receives only the data.
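Over the (time, value) model of events, the handler operators have one-line definitions; this is our sketch, with each derived operator discarding part of the occurrence:

```haskell
-- Toy model of events as (occurrence time, value) pairs.
type Time = Double
type Event a = (Time, a)

-- Handler receives both the time and the data.
(+=>) :: Event a -> (Time -> a -> b) -> Event b
(t, x) +=> f = (t, f t x)

-- Data only.
(==>) :: Event a -> (a -> b) -> Event b
e ==> f = e +=> \_ x -> f x

-- Time only.
(*=>) :: Event a -> (Time -> b) -> Event b
e *=> f = e +=> \t _ -> f t

-- Neither: replace the value outright.
(-=>) :: Event a -> b -> Event b
e -=> y = e +=> \_ _ -> y
```

All three derived operators are definable from `+=>`, matching the mnemonic reading above.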
\textbf{Constant events.} The simplest kind of event is one specified directly by its time and value:
\[ \text{constEv} : \text{Time} \rightarrow \alpha \rightarrow \text{Event}_\alpha \]
\[ \text{occ}[\text{constEv}\ t\ x] = (t, x) \]
\textbf{External events.} For this paper we consider only one kind of external event, mouse button presses, which can be from either the left or right button. The value associated with a button press event is the corresponding button release event, which in turn yields a unit value ($()$ is the unit type):
\[ \text{lbp}, \text{rbp} : \text{Time} \rightarrow \text{Event}_{\text{Event}_{()}} \]
The meaning of an event $\text{lbp}\ t_0$, for example, is the pair $(t, e)$, such that $t$ is the time of the first left button press after $t_0$, and $e$ is the event corresponding to the first left button release after $t$. Thus the behavior:
\[ b_1 \text{ untilB } \text{lbp}\ t_0 \text{ ==> } \lambda e. \]
\[ \quad b_2 \text{ untilB } e \text{ -=> } b_3 \]
exhibits behavior $b_1$ until the left button is pressed, at which point it becomes $b_2$ until the left button is released, at which point it becomes $b_3$.
Predicates. It is natural to want to specify certain events as the first time that a boolean behavior becomes true after a given time.
\[ \text{predicate} : \text{Behavior}_{\text{Bool}} \rightarrow \text{Time} \rightarrow \text{Event}_{()} \]
\[ \text{occ}[\text{predicate}\ b\ t_0] = (\inf \{ t > t_0 : \text{at}[b]\ t \}, ()) \]
That is, the time of a predicate event is the infimum of the set of times greater than $t_0$ at which the behavior is true. Note that this time could be equal to $t_0$. The behavior:
\[ b_1 \text{ untilB } \text{predicate}\ (\sin\ \text{time} = 0.5)\ t_0 \text{ -=> } b_2 \]
exhibits $b_1$ until the first time $t$ after $t_0$ that $\sin t$ is $0.5$, after which it exhibits $b_2$. If the boolean behavior argument to predicate were an arbitrarily complex computable function, then predicate would not be computable. To cope with this problem, we restrict behaviors somewhat, to make predicate not only computable, but also efficient. We will return to this issue in Section 4.2.
\textbf{Choice.} We can choose the earlier of two events with the ".|." operator:
\[ (\text{.|.}) : \text{Event}_\alpha \rightarrow \text{Event}_\alpha \rightarrow \text{Event}_\alpha \]
\[ \text{occ}[e \text{ .|. } f] = \text{if } t \leq t' \text{ then } (t, x) \text{ else } (t', x') \]
where $(t, x) = \text{occ}[e]$ and $(t', x') = \text{occ}[f]$, and $t \leq t'$ is the extended ordering of Section 2.1.
For example, the behavior:
\[ b_1 \text{ untilB } (\text{lbp}\ t_0 \text{ .|. } \text{predicate}\ (\text{time} > 5)\ t_0) \text{ -=> } b_2 \]
waits for either a left button press or a timeout of 5 seconds before switching from behavior \( b_1 \) to behavior \( b_2 \). As an alternative, the following example changes to a different behavior, \( b_3 \), upon timeout.
\[ b_1 \text{ untilB } (\text{lbp}\ t_0 \text{ -=> } b_2 \text{ .|. } \text{predicate}\ (\text{time} > 5)\ t_0 \text{ -=> } b_3) \]
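With events modeled as (time, value) pairs, choice is simply "keep the earlier occurrence"; a toy sketch (ours, with ties going to the left event):

```haskell
-- Toy model of events as (occurrence time, value) pairs.
type Event a = (Double, a)

-- Choice: the occurrence with the earlier time wins.
(.|.) :: Event a -> Event a -> Event a
e@(t, _) .|. f@(t', _) = if t <= t' then e else f
```

In the real semantics the comparison uses the extended, partially defined ordering, so the choice can be resolved as soon as one occurrence time is known to precede a lower bound on the other.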
Snapshot. At the moment an event occurs it is often convenient to take a "snapshot" of a behavior's value at that point in time.
\[
\text{snapshot} : \text{Event}_\alpha \rightarrow \text{Behavior}_\beta \rightarrow \text{Event}_{\alpha \times \beta}
\]
\[
\text{occ}[e \text{ snapshot } b] = (t, (x, \text{at}[b]\ t)), \quad \text{where } (t, x) = \text{occ}[e]
\]
For example, the behavior:
\[
b_1 \text{ untilB } (\text{lbp}\ t_0 \text{ snapshot } \sin\ \text{time}) \text{ ==> } \lambda (e, y).\ b_2
\]
grabs the sine of the time at which the left button is pressed, binds it to $y$, and continues with behavior $b_2$, which presumably depends on $y$. Although this example could also be achieved by grabbing the time of the left button press event and computing its sine, in general the behavior being snapshotted can be arbitrarily complex, and may in fact be dependent on external events.
\textbf{Event sequencing.} It is sometimes useful to use one event to generate another. The event $\text{joinEv}\ e$ is the event that occurs when $e'$ occurs, where $e'$ is the value part of $e$.
\[
\text{joinEv} : \text{Event}_{\text{Event}_\alpha} \rightarrow \text{Event}_\alpha
\]
\[
\text{occ}[\text{joinEv}\ e] = \text{occ}[\text{snd}\ (\text{occ}[e])]
\]
(This function is so named because it is the "join" operator for the Event monad [22].)
For example, the event
\[
\text{joinEv}\ (\text{lbp}\ t_0 \text{ *=> } \text{predicate}\ (b = 0))
\]
occurs the first time that the behavior \(b\) has the value zero after the first left button press after time \(t_0\).
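The snapshot and joinEv semantics fit the same pair model of events; a toy sketch (ours, not Fran's implementation):

```haskell
-- Toy model: events as (time, value) pairs, behaviors as
-- sampling functions of Double-valued time.
type Time = Double
type Event a = (Time, a)
newtype Behavior a = Behavior { at :: Time -> a }

-- Pair the event's value with the behavior's value at the occurrence time.
snapshot :: Event a -> Behavior b -> Event (a, b)
snapshot (t, x) b = (t, (x, at b t))

-- joinEv: the result is the inner event carried as the outer event's value.
joinEv :: Event (Event a) -> Event a
joinEv (_, e') = e'
```

In this model `joinEv` simply projects out the inner event, matching the semantic equation occ[joinEv e] = occ[snd (occ[e])].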
3 Some Larger Examples
The previous section presented the primitive combinators for behaviors and events, along with their formal semantics. The following examples illustrate the use of some of these combinators. The examples are given as Haskell code, whose correspondence to the formal semantics should be obvious. (All values in these examples are behaviors, though we do not explicitly say so.)
To begin, let's define a couple of simple utility behaviors. The first varies smoothly and cyclically between -1 and +1.
\[
\text{wiggle} = \sin (\pi * \text{time})
\]
Using \(\text{wiggle}\) we can define a function that smoothly varies between its two argument values.
\[
\text{wiggleRange lo hi} = \text{lo} + (\text{hi-lo}) * (\text{wiggle}+1)/2
\]
Now let's create a very simple animation: a red, pulsating ball.
\[
\text{pBall} = \text{withColor red} (\text{bigger} (\text{wiggleRange} 0.5 1) \text{circle})
\]
The function \(\text{bigger}\) scales its second argument by the amount specified by its first argument; since the first argument is a behavior, the result is also a behavior, in this case a ball whose size varies from full size to half its full size.
A key attribute of Fran is that behaviors are composable. For example, \(\text{pBall}\) can be further manipulated, as in:
\[
\text{rBall} = \text{move} (\text{vectorPolar} 2.0 \text{time}) (\text{bigger} 0.1 \text{pBall})
\]
which yields a ball moving along a circular path of radius 2.0 at a constant angular rate (the angle is just \text{time}). The ball itself is the same as \(\text{pBall}\) (red and pulsating), but 1/10 the original size.
Certain external phenomena can be treated as behaviors, too. For example, the position of the mouse can naturally be thought of as a vector behavior. Thus to cause an image to track the position of the mouse exactly, all we need to do is:
\[
\text{followMouse im t} = \text{move} (\text{mouse t}) \text{im}
\]
(The function \(\text{move}\) shifts an image by an offset vector.)
Another natural way to define an animation is in terms of rates. For example, we can expand on the mouse-follower idea by having the image follow the mouse at a rate that is dependent on how far the image is from the current mouse position.
```
followMouseRate im t = move offset im
  where offset = integral rate t
        rate   = mouse t .-. pos
        pos    = origin2 .+^ offset
```
Note the mutually recursive specification of offset, rate, and pos: The offset starts out as the zero vector, and grows at a rate called rate. The rate is defined to be the difference between the mouse's location (mouse is a primitive behavior that represents mouse position) and our animation's position pos. pos, in turn, is defined in terms of the offset relative to the origin. As a result, the given image always pursues the mouse, but moves faster when the distance is greater. (The operation .+^ adds a point and a vector, yielding a point, and .-. subtracts two points, yielding a vector.)
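The mutually recursive pursuit can be approximated numerically; the following sketch (ours, not Fran code) Euler-integrates rate = target - pos in one dimension, for a fixed, hypothetical target position:

```haskell
-- Discrete sketch of the pursuit: Euler integration of
--   rate = target - pos
-- over n steps of size dt, starting from pos0.
pursue :: Double  -- ^ target position (stands in for the mouse)
       -> Double  -- ^ initial position
       -> Double  -- ^ step size dt
       -> Int     -- ^ number of steps
       -> Double
pursue target pos0 dt n = go pos0 n
  where
    go pos 0 = pos
    go pos k = go (pos + dt * (target - pos)) (k - 1)
```

Each step moves a fixed fraction of the remaining distance, so the position closes in on the target exponentially, faster when the gap is larger, just as the prose describes.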
As a variation, we can virtually attach the image to the mouse cursor using a spring. The definition is very similar, with position defined by a starting point and a growing offset. This time, however, the rate is itself changing at a rate we call accel. This acceleration is defined in part by the difference between the mouse position and the image's position, but we also add some drag that tends to slow down the image by adding an acceleration in the direction opposite to its movement. (Increasing or decreasing the "drag factor" of 0.5 below creates more or less drag.)
```
followMouseSpring im t = move offset im
  where offset = integral rate t
        rate   = integral accel t
        accel  = (mouse t .-. pos) - 0.5 *^ rate
        pos    = origin2 .+^ offset
```
(The operator *^ multiplies a real number by a vector, yielding a vector.)
As an example of event handling, the following behavior describes a color that changes between red and blue each time the left button is pressed. We accomplish this change with the help of a function \(\text{cycle}\) that takes two colors, \(c_1\) and \(c_2\), and gives an animated color that starts out as \(c_1\). When the button is pressed, it swaps \(c_1\) and \(c_2\) and repeats (using recursion).
```
anim2 t0 = withColor (cycle red blue t0) circle
  where cycle c1 c2 t0 =
          c1 `untilB` lbp t0 *=> cycle c2 c1
```
```
bounce minVal maxVal y0 v0 g t0 = path
 where
  path = start t0 (y0, v0)

  start t0 (y0, v0) = y `untilB` doBounce +=> start
   where
    y = lift0 y0 + integral v t0
    v = lift0 v0 + integral g t0

    reciprocity = 0.8

    doBounce :: Event (RealVal, RealVal)   -- returns new y and v
    doBounce = (collide `snapshot` pairB y v) ==> snd ==>
                 \ (yHit, vHit) -> (yHit, - reciprocity * vHit)

    collide = predicate ((y <=* lift0 minVal &&* v <=* 0) ||*
                         (y >=* lift0 maxVal &&* v >=* 0)) t0
```
Figure 1: One-Dimensional Bounce
Note that the Time argument in the recursive call to cycle is supplied automatically by *=>.
The next example is a number-valued behavior that starts out as zero, and becomes \text{-1} while the left button is pressed or \text{1} while the right button is pressed.
```
bSign t0 =
    0 `untilB` lbp t0 ==> nonZero (-1) .|.
               rbp t0 ==> nonZero 1
  where nonZero r stop =
          r `untilB` stop *=> bSign
```
We can use the function \text{bSign} above to control the rate of growth of an image. Pressing the left (or right) button causes the image to shrink (or grow) until released. Put another way, the rate of growth is \text{0}, \text{-1}, or \text{1}, according to \text{bSign}.
```
grow im t0 = bigger size im
  where size = 1 + integral rate t0
        rate = bSign t0
```
A very simple modification to the grow function above causes the image to grow or shrink at the rate of its own size (i.e. exponentially).
```
grow' im t0 = bigger size im
  where size = 1 + integral rate t0
        rate = bSign t0 * size
```
Here's an example that demonstrates that even \text{colors} can be animated. Using the function \text{rgb}, a color behavior is created by fixing the blue component, but allowing the red and green components to vary with time.
```
withColor (rgb (abs (cos time))
               (abs (sin (2 * time)))
               0.5)
          circle
```
As a final example, let's develop a modular program to describe “bouncing balls.” First note that the physical equations describing the position \( y \) and velocity \( v \) at time \( t \) of an object being accelerated by gravity \( g \) are:
\[ y = y_0 + \int_{t_0}^{t} v \, dt \]
\[ v = v_0 + \int_{t_0}^{t} g \, dt \]
where \( y_0 \) and \( v_0 \) are the initial position and velocity, respectively of the object at time \( t_0 \). In Fran these equations are simply:
\[ y = \text{lift0} \ y0 + \text{integral} \ v t0 \]
\[ v = \text{lift0} \ v0 + \text{integral} \ g t0 \]
Next we define a function \text{bounce} that, in addition to computing the position of an object based on the above equations, also determines when the ball has hit either the floor or the ceiling, and if so reverses the direction of the ball while reducing its velocity by a certain \text{reciprocity}, to account for loss of energy during the collision. The code for \text{bounce} is shown in Figure 1. Note that collision is defined as the moment when either the position has exceeded the \text{minVal} and the velocity is negative, or the position has exceeded the \text{maxVal} and the velocity is positive. When such a collision is detected, the current position and velocity are snapshotted, and the cycle repeats with the velocity negated and scaled by the reciprocity factor. (The various operators with * after them are lifted versions of the underlying operators.)
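The bounce rule can be mimicked with a discrete simulation (ours, not Fran's continuous semantics): positions advance by Euler steps, and the collision test and velocity reversal follow the description above.

```haskell
-- Discrete sketch of the one-dimensional bounce: Euler steps plus the
-- collision rule (reverse and scale the velocity by the reciprocity).
bounceSim :: Double  -- ^ minVal (floor)
          -> Double  -- ^ maxVal (ceiling)
          -> Double  -- ^ acceleration g
          -> Double  -- ^ step size dt
          -> Double  -- ^ initial position (initial velocity 0)
          -> Int     -- ^ number of steps
          -> (Double, Double)  -- ^ final (position, velocity)
bounceSim minVal maxVal g dt y0 = go (y0, 0)
  where
    reciprocity = 0.8
    go (y, v) 0 = (y, v)
    go (y, v) k
      -- collision: at the floor moving down, or at the ceiling moving up
      | (y <= minVal && v <= 0) || (y >= maxVal && v >= 0)
                  = go (y, - reciprocity * v) (k - 1)
      | otherwise = go (y + dt * v, v + dt * g) (k - 1)
```

With a small step size the ball stays (approximately) inside the box and loses energy at each bounce, as in the continuous version.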
Now that \text{bounce} is defined, we can also use it to describe \text{horizontal} movement, using \text{0} for acceleration. Thus to simulate a bouncing ball in a box, we can simply write:
```
moveXY x y (withColor green circle)
  where x = bounce xMin xMax x0 vx0 0 t0
        y = bounce yMin yMax y0 vy0 g t0
```
where \text{xMin}, \text{xMax}, \text{yMin}, and \text{yMax} are the dimensions of the box.
4 Implementation
The formal semantics given in Section 2 could almost serve as an implementation, but not quite. In this section, we describe the non-obvious implementation techniques used in Fran. One relatively minor item is integration. While symbolic integration could certainly be used for simple behaviors, we have instead adapted standard textbook numerical techniques. (We chiefly use fourth-order Runge-Kutta [17].)
4.1 Representing Behaviors
An early implementation of Fran represented behaviors as implied in the formal semantics:
```
data Behavior a = Behavior (Time -> a)
```
This representation, however, leads to a serious inefficiency. To see why, consider a simple sequentially recursive reactive behavior like the following.
```
b = toggle True 0
  where toggle val t0 =
          lift0 val `untilB` lbp t0 *=>
            toggle (not val)
```
This behavior toggles between true and false whenever the left button is pressed. Suppose \( b \) is sampled at a time \( t_1 \) after the first button press, and we then need to sample \( b \) at a time \( t_2 > t_1 \). Then \( b \) needs to notice that \( t_2 \) is after the first button press, and then see whether it is also beyond the second button press. After \( n \) such events, sampling must verify that the given times are indeed past \( n \) events, so the running time and the (lazily expanded) representation would be \( O(n) \). One could try to eliminate this “space-time leak” by switching to a stateful implementation, but doing so would interfere with a behavior’s ability to support multiple simultaneously time-transformed versions of itself.
We solve this problem by having behavior sampling generate not only a value, but also a new, possibly simpler, behavior.
```
data Behavior a =
Behavior (Time -> (a, Behavior a))
```
(In fact, we use a slightly more complex representation, as explained in Section 4.2 below.) Once an event is detected to have occurred, yielding a new behavior \( b' \), it is \( b' \) that is sampled, and the resulting value and a possibly even further simplified behavior are returned. In most cases (ones not involving time transformation), the original \( \text{untilB} \) behavior is then no longer accessible, and so gets garbage collected. Note that this optimization implies some loss of generality: sampling must be done with monotonically non-decreasing times.
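The residual-behavior idea can be sketched as follows (our toy model, with Double times and the event reduced to a known switch time te); once the switch time has passed, the wrapper disappears from the residual behavior entirely:

```haskell
-- Sampling returns a value together with a (possibly simpler) behavior
-- to use for all later sample times.
newtype Behavior a = Behavior { sample :: Double -> (a, Behavior a) }

constB :: a -> Behavior a
constB x = b where b = Behavior (\_ -> (x, b))

-- b1 until time te, then b2. After te, the untilAt wrapper is dropped,
-- so later samples go straight to b2 (the O(n) chain never builds up).
untilAt :: Behavior a -> Double -> Behavior a -> Behavior a
untilAt b1 te b2 = Behavior f
  where
    f t | t <= te   = let (x, b1') = sample b1 t
                      in (x, untilAt b1' te b2)
        | otherwise = sample b2 t    -- residual: no more untilAt
```

Sampling past the switch time hands back `b2`'s own residual, which is why this representation requires monotonically non-decreasing sample times.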
The same technique applies to integration, eliminating the need to restart integration at each sampling. (In fact, our formulation of numerical integration is as sequentially recursive reactive behaviors.)
4.2 Implementing Events
There are really two key challenges with event detection: (a) how to avoid trying too soon to catch events, and (b) how to catch events efficiently and robustly when we need to. We use a form of laziness for the former challenge, and a technique called \( \text{interval analysis} \) for the latter.
Representing events lazily. Recall the semantics of reactivity:
\[
\begin{align*}
\text{untilB} &: \text{Behavior}_\alpha \rightarrow \text{Event}_{\text{Behavior}_\alpha} \rightarrow \text{Behavior}_\alpha \\
\text{at}[b \text{ untilB } e]\ t &= \text{if } t \leq t_e \text{ then } \text{at}[b]\ t \text{ else } \text{at}[b']\ t \\
&\quad \text{where } (t_e, b') = \text{occ}[e]
\end{align*}
\]
Note that values of an \( \text{untilB} \)-based behavior at times \( t \leq t_e \) do not depend on the precise value of \( t_e \), just the \textit{partial information} about \( t_e \) that it is at least \( t \). This observation is crucial, because it may be quite expensive or, in the case of user input, even impossible to know the value of \( t_e \) before the time \( t_e \) arrives. Instead, we represent the time \( t_e \) by a chain of lower-bound time values increasing monotonically with respect to the information ordering defined in Section 2.1. Because these chains are evaluated \textit{lazily}, detection is done progressively on demand.
\textbf{Detecting predicate events.} The second implementation challenge raised by events is how to determine when \( \text{predicate} \) events occur. For instance, consider the event that occurs when \( t \cdot e^{t} = 10 \):
\[
\text{predicate}\ (\text{time} * \exp\ \text{time} == 10)\ t_0
\]
Any technique based solely on sampling of behaviors must fail to detect events like this whose boolean behaviors are true only instantaneously. An alternative technique is symbolic equation solving. Unfortunately, except for very simple examples, equations cannot be solved symbolically.
The technique we use to detect predicate events is \emph{interval analysis} (IA) [20]. It uses more information from a behavior than can be extracted purely through sampling, but it does not require symbolic equation solving. Instead, every behavior is able not only to tell how a sample time maps to a sample value, but also to produce a conservative interval bound on the values taken on by the behavior over a given time interval \( I \). More precisely, the operation \emph{during}, mapping time intervals to \( \alpha \) intervals, has the property that \( \text{at}[b]\ t \in \text{during}[b]\ I \) for any \( \alpha \)-valued behavior \( b \), time interval \( I \), and time \( t \in I \).
An interval is represented simply as a pair of values:
```
data Ivl a = a `Upto` a
```
For instance, 3 `Upto` 10 represents the interval \([3,10]\), i.e., the set of \( x \) such that \( 3 \leq x \leq 10 \). The implementation of a behavior then contains both the time-sampling and interval-sampling functions:
```
data Behavior a =
  Behavior (Time     -> (a,     Behavior a))
           (Ivl Time -> (Ivl a, Behavior a))
```
As an example, the behavior time maps times and time intervals to themselves, and returns an unchanged behavior.
```
time :: Behavior Time
time = Behavior (\t  -> (t,  time))
                (\iv -> (iv, time))
```
“Lifting” of functions to the level of behaviors works similarly to the description in Section 2, but additionally maps domain intervals to range intervals, and re-applies the lifted functions to possibly altered behavior arguments. For instance, \( \text{lift}_2 \) is implemented as follows.
```
lift2 f fi b1 b2 = Behavior sample isample
  where
    sample t = (f x1 x2, lift2 f fi b1' b2')
      where (x1, b1') = b1 `at` t
            (x2, b2') = b2 `at` t

    isample iv = (fi x1i x2i, lift2 f fi b1' b2')
      where (x1i, b1') = b1 `during` iv
            (x2i, b2') = b2 `during` iv
```
The restriction on behaviors referred to in Section 2.3 that makes event detection possible is that behaviors are composed only of functions \( f \) for which a corresponding \( fi \) is known in the \( \text{lift}_n \) functions. (These \( fi \) are called “inclusion functions.”)
Defining functions’ behaviors over intervals is well-understood [20], and we omit the details here, other than to point out that Haskell’s type classes once again provide a convenient notation for interval versions of the standard arithmetic operators. For example, evaluating
$$(2 \text{ `Upto` } 4) + (10 \text{ `Upto` } 30)$$
yields the interval $[12,34]$. Also, a useful IA technique is to exploit intervals of monotonicity. For instance, the exponential function is monotonically increasing, while the sin and cos functions alternate between monotonically increasing and monotonically decreasing on intervals of width $\pi$.
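Interval versions of the arithmetic operators can be sketched as a Num instance (ours, on a monomorphic interval type rather than Fran's Ivl), reproducing the [12,34] example; addition and negation exploit monotonicity directly, while multiplication falls back to taking extremes over the corner products:

```haskell
-- Interval arithmetic sketch for inclusion functions.
data Ivl = Double `Upto` Double
  deriving (Eq, Show)

instance Num Ivl where
  -- (+) is monotone in both arguments: add endpoints.
  (lo1 `Upto` hi1) + (lo2 `Upto` hi2) = (lo1 + lo2) `Upto` (hi1 + hi2)
  -- (*) is not monotone in general: take extremes of the corner products.
  (lo1 `Upto` hi1) * (lo2 `Upto` hi2) =
    let ps = [lo1 * lo2, lo1 * hi2, hi1 * lo2, hi1 * hi2]
    in minimum ps `Upto` maximum ps
  -- negation is monotonically decreasing: flip the endpoints.
  negate (lo `Upto` hi) = negate hi `Upto` negate lo
  fromInteger n = fromInteger n `Upto` fromInteger n
  abs    = error "not needed for this sketch"
  signum = error "not needed for this sketch"
```

Subtraction comes for free from the Num defaults ((-) via negate), and the conservative-bound property holds by construction for each operator.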
We can also apply IA to boolean behaviors, if we consider booleans to be ordered with $\text{False} < \text{True}$. There are three nonempty boolean intervals, corresponding to the behavior being true never, sometimes, or always. For example, the interval version of equality checks whether its two interval arguments overlap. If not, the answer is uniformly false. If both intervals are the same singleton interval, then the answer is uniformly true. Otherwise, IA only knows that the answer may be true or false throughout the interval. Specifically:
```
(lo1 `Upto` hi1) == (lo2 `Upto` hi2)
  | hi1 < lo2 || hi2 < lo1                 = False `Upto` False  -- never true
  | lo1 == hi1 && lo2 == hi2 && lo1 == lo2 = True  `Upto` True   -- always true
  | otherwise                              = False `Upto` True   -- sometimes true
```
Similarly, it is straightforward to define interval versions of the inequality operators and logical operators (conjunction, disjunction, and negation).
With this background, detection of predicate events through IA is straightforward. Given a start time $t_1$, choose a time $t_2 > t_1$, and evaluate the boolean behavior over $[t_1, t_2]$, yielding one of the three boolean intervals listed above. If the result is uniformly false, then $t_2$ is guaranteed to be a lower bound for the event time. If uniformly true, then the event time is $t_1$ (which is the infimum of times after $t_1$). Otherwise, the interval is split in half, and the two halves are considered, starting with the earlier half (because we are looking for the first time the boolean behavior is true). At some point in this recursive search, the interval being divided becomes smaller than the desired degree of temporal accuracy, at which point event detection claims a success.
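The divide-and-conquer search can be sketched concretely. In this sketch (ours, simplified from the paper's predicate in Appendix A: no doubling search, and a finite search range), a boolean behavior is represented only by an inclusion function from a time interval to one of the three boolean intervals; sqIv is a hypothetical inclusion function for the event "time² ≥ 10" on nonnegative intervals:

```haskell
-- The three nonempty boolean intervals: true never, sometimes, or always.
data BoolIvl = Never | Sometimes | Always
  deriving (Eq, Show)

-- Bisect until the candidate interval is narrower than eps,
-- always exploring the earlier half first.
detect :: ((Double, Double) -> BoolIvl)  -- ^ inclusion function
       -> Double                         -- ^ temporal accuracy eps
       -> (Double, Double)               -- ^ search interval
       -> Maybe Double
detect during eps (t1, t2) =
  case during (t1, t2) of
    Never  -> Nothing        -- provably no occurrence in [t1, t2]
    Always -> Just t1        -- true throughout: infimum is t1
    Sometimes
      | t2 - t1 < eps -> Just t1   -- narrow enough: claim success
      | otherwise ->
          let mid = (t1 + t2) / 2
          in case detect during eps (t1, mid) of
               Just t  -> Just t
               Nothing -> detect during eps (mid, t2)

-- Hypothetical inclusion function for "time^2 >= 10", valid for
-- nonnegative intervals (squaring is monotone there).
sqIv :: (Double, Double) -> BoolIvl
sqIv (lo, hi)
  | lo * lo >= 10 = Always
  | hi * hi < 10  = Never
  | otherwise     = Sometimes
```

Running `detect sqIv 1e-4 (0, 10)` narrows in on the event time near √10 ≈ 3.1623, while an interval that the inclusion function proves uniformly false is discarded without any further search.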
This event detection algorithm is captured in the definition of predicate given in Appendix A. This function uses the above divide-and-conquer strategy in narrowing down the interval, but also a double-and-conquer strategy in searching the right-unbounded time interval: if the event was not found in the next \( w \) seconds, then perhaps we should look a bit further into the future (\( 2w \) seconds) the next time around.
It is also possible to apply IA to positional user input. The idea is to place bounds on the rate or acceleration of the positional input, and then make a worst-case analysis based on these bounds. We have not yet implemented this idea.
5 Related Work
Henderson’s functional geometry [12] was one of the first purely declarative approaches to graphics, although it does not deal with animation or reactivity. Several other researchers have also found declarative languages well-suited for modeling pictures. Examples include [15, 23, 3, 10].
Arya used a lazy functional model to model 2D animation as lazy lists of pictures [1, 2], constructed using list combinators. While this work was quite elegant, the use of lists implies a discrete model of time, which is somewhat unnatural. Problems with a discrete model include the fact that time-scaling becomes difficult, requiring throwing away frames or interpolation between frames, and rendering an animation requires that the frame rate match the discrete representation; if the frames cannot be generated fast enough, the perceived animation will slow down. Our continuous model avoids these problems, and has the pleasant property that animations run at precisely the same speed, regardless of how fast the underlying hardware is (slower hardware will generate less smooth animations, but they will still run at the same rate).
The TBAG system modeled 3D animations as functions over continuous time, using a “behavior” type family [8, 19]. These behaviors are built up via combinators that are automatically invoked during solution of high level constraints. Because it used continuous time, TBAG was able to support derivatives and integrals. It also used the idea of elevating functions on static values into functions on behaviors, which we adopted. Unlike our approach, however, reactivity was handled imperatively, through constraint assertion and retraction, performed by an application program.
CML (Concurrent ML) formalized synchronous operations as first-class, purely functional values called “events” [18]. Our event combinators “.|.” and “==>” correspond to CML’s choose and wrap functions. There are substantial differences, however, between the meaning given to “events” in these two approaches. In CML, events are ultimately used to perform an action, such as reading input from or writing output to a file or another process. In contrast, our events are used purely for the values they generate. These values often turn out to be behaviors, although they can also be new events, tuples, functions, etc.
Concurrent Haskell [14] extends the pure lazy functional programming language Haskell with a small set of primitives for explicit concurrency, designed around Haskell’s monadic support for I/O. While this system is purely functional in the technical sense, its semantics has a strongly imperative feel. That is, expressions are evaluated without side-effects to yield concurrent, imperative computations, which are executed to perform the implied side-effects. In contrast, modeling entire behaviors as implicitly concurrent functions of continuous time yields what we consider a more declarative feel.
Haskore [13] is a purely functional approach to constructing, analyzing, and performing computer music, which has much in common with Henderson’s functional geometry, even though it is for a completely different medium. The Haskore work also points out useful algebraic properties that such declarative systems possess. Other computer music languages worth mentioning include Canon [5], Fugue [6], and a language being developed at GRAME [16], only the last of which is purely declarative. Fugue also highlights the utility of lazy evaluation in certain contexts, but extra effort is needed to make this work in its Lisp-based context, whereas in a non-strict language such as Haskell it essentially comes “for free.”
DirectX Animation is a library developed at Microsoft to support interactive animation. Fran and DirectX Animation both grew out of the ideas in an earlier design called Active VRML [7]. DirectX Animation is used from more mainstream imperative languages, and so mixes the functional and imperative approaches.
There are also several languages designed around a synchronous data-flow notion of computation. The general-purpose functional language Lucid [21] is an example of this style of language, but more important are the languages Signal [11] and Lustre [4], which were specifically designed for control of real-time systems.
In Signal, the most fundamental idea is that of a signal, a time-ordered sequence of values. Unlike Fran, however, time is not a value, but rather is implicit in the ordering of values in a signal. By its very nature time is thus discrete rather than continuous, with emphasis on the relative ordering of values in a data-flow-like framework. The designers of Signal have also developed a clock calculus with which one can reason about Signal programs. Lustre is a language similar to Signal, rooted again in the notion of a sequence, and owing much of its nature to Lucid.
6 Conclusions
Writing rich, reactive animations is a potentially tedious and error-prone task using conventional programming methodologies, primarily because of the attention needed for issues of presentation. We have described a system called Fran that remedies this problem by concentrating on issues of modeling, leaving presentation details to the underlying implementation. We have given a formal semantics and described an implementation in Haskell, which runs acceptably fast using the Hugs interpreter. Future work lies in improving performance through the use of standard compilation methods as well as domain-specific optimization techniques; extending the ideas to 3D graphics and sound; and investigating other applications of this modeling approach to software development.
Our implementation of Fran currently runs under the Windows ’95/NT version of Hugs, a Haskell implementation being developed collaboratively by Yale, Nottingham, and Glasgow Universities. It is convenient for developing animation programs, because of quick turn-around from modification to execution, and it runs with acceptable performance, for a byte-code interpreter. We expect marked performance improvement once Fran is running under GHC (the Glasgow Haskell Compiler). Even better, when these two Haskell implementations are integrated, Fran programs will be convenient to develop and run fast. The Hugs implementation, which includes the entire Fran system, may be retrieved from http://www.haskell.org/hugs. Although this paper will give the reader an understanding of the technical ideas underpinning Fran, its power as an animation engine (and how much fun it is to play with!) can only be appreciated by using it.
Acknowledgements We wish to thank Jim Kajiyama for early discussions that stimulated our ideas for modeling reactivity; Todd Knoblock who helped explore these ideas as well as many other variations; John Peterson and Alastair Reid for experimental implementations; Philip Wadler for thoughtful comments that resulted in simplifying the semantic model; and Sigbjorn Finne for helping with the implementation of Fran. We also wish to acknowledge funding of this project from Microsoft Research, DARPA/AFOSR under grant number F30602-96-0-0232, and NSF under grant number CCR-9633990.
Appendix A: Haskell Code for Predicate Event Detection
```haskell
type BoolB = Behavior Bool
type TimeI = IvI Time

predicate :: BoolB -> Time -> Event ()
predicate cond t0 = predAfter cond t0 1
 where
  predAfter cond t0 width =
    predIn cond (t0 `Upto` (t0 + width)) (\ cond' ->
      predAfter cond' (t0 + width) (2 * width))

predIn :: BoolB -> TimeI -> (BoolB -> Event ()) -> Event ()
predIn cond iv tryNext =
  case valI of
    False `Upto` False ->
      -- No occurrence in this interval.  Note the lower bound
      -- and try the next interval.
      timeIsAtLeast hi (tryNext cond')
    False `Upto` True ->
      -- found at least one
    True `Upto` False ->
      -- found at least one
    True `Upto` True ->
      -- found exactly one
 where
  lo `Upto` hi = iv
  mid = (hi + lo) / 2
  ivLeftTrimmed = (lo + leftSkipWidth) `Upto` hi
  (valI, cond') = cond `during` ivLeftTrimmed

-- Interval size limit for temporal subdivision
eventEpsilon = 0.001 :: Time

-- Simulate left-open-ness via a small increment
leftSkipWidth = 0.0001 :: Time
```
Security Automaton to Mitigate Laser-based Fault Attacks on Smart Cards
Guillaume Bouffard
XLIM UMR 7252 – University of Limoges,
123 Avenue Albert Thomas, 87060 Limoges CEDEX, France
E-mail: guillaume.bouffard@unilim.fr
Bhagyalekshmy N Thampi
University of Limoges,
123 Avenue Albert Thomas, 87060 Limoges CEDEX, France
E-mail: bhagyalekshmy.narayanan-thampi@xlim.fr
Jean-Louis Lanet
University of Limoges,
123 Avenue Albert Thomas, 87060 Limoges CEDEX, France
E-mail: jean-louis.lanet@unilim.fr
Abstract:
Security and attacks are two sides of the same coin in the smart card industry. Smart cards are prone to different types of attacks that aim to gain access to the assets stored in them, which can cause security issues. It is necessary to identify these attacks and implement appropriate countermeasures to mitigate their effects. Fault attacks are one of them: they can introduce abnormal behaviour into the smart card environment, and redundancy is necessary to detect such changes. In this work we propose an automatic method to obtain control flow redundancy using a security automaton to mitigate laser-based fault attacks, and hence implement a smart card countermeasure based on the combination of static analysis and dynamic monitoring. This is a very cost-effective approach which can identify and mitigate the effects of fault attacks in an efficient way.
Keywords: Smart Card; Laser Fault Attacks; Security Automata; Countermeasure
Biographical notes:
Guillaume Bouffard received his Master’s degree in Cryptology and IT-Security (CRYPTIS) from the University of Limoges in 2010. He worked as a research engineer in Smart Secure Devices (SSD) team at XLIM labs for 6 months on smart card physical security before starting his PhD in 2011. His thesis is on the possibilities and issues of laser beam attacks on Java Card virtual machine. His research interests include physical and logical attacks on embedded systems and smart cards.
Bhagyalekshmy N. Thampi received her engineering degree in Electronics and Communication from Anna University, India, and her MSc. in Management of Embedded Electronic Systems from ESIGELEC, France. She worked as a research engineer in the Smart Secure Devices (SSD) team at XLIM, France. Her research interests include smart card security and EMC/EMI.
Jean-Louis Lanet is a Professor at the Computer Science Department, University of Limoges since 2007. Prior to that, he was a senior researcher at Gemplus Research Labs (1996–2007). During this period he spent two years at INRIA (Sophia-Antipolis) (2003–2005) as an engineer and as a senior research associate in the Everest team. He started his career as a researcher at Elecma, Electronic division of the Snecma, now a part of the Safran group (1984–1995) and his field of research was on jet engine control, fault tolerant architecture and real time scheduling. Now his research interests include security of small systems like smart cards and software engineering.
1 Introduction
The smart card is an integrated chip forming the smallest computing platform that incorporates security at the system level. Smart cards find application in banking, SIM cards, health insurance, electronic passports, etc. Data stored inside the card are extremely sensitive and require protection from different types of hardware and software attacks. Attacks can be purely logical, exploiting software vulnerabilities, or hardware-based, abusing side-channel vulnerabilities to access protected security data or keys, or even cryptographic information. Logical attacks can be performed by executing illegal instructions and/or accessing the secret information of a program. Hardware attacks can be realised using electromagnetic probes or laser beams. Among the hardware attacks, fault injection (FI) using a laser beam is one of the most difficult to handle. It causes errors in program execution, perturbations in the chip registers, bit flips, etc., which can be detected using some redundancy. Several approaches have been proposed in the literature; among them, the use of a security automaton and a reference monitor is of particular interest. These techniques have emerged as a powerful and flexible method to enforce security policies over untrusted code.
In Bouffard et al. (2013), we presented the general approach of using a security automaton in a smart card. In this article, we detail how this approach has been implemented in a Java Card Virtual Machine (JCVM) and report the obtained metrics. We evaluated the approach on applets provided by our industrial partners, in terms of memory footprint and execution time, to check whether it could be affordable in an industrial context.
This paper is organised as follows: Section 2 describes the security architecture of a Java-based smart card. Section 3 explains perturbation attacks, especially FI attacks, on smart cards and their effects on program execution. Known fault detection mechanisms and their comparison are discussed in Section 4. Section 5 presents our contribution and countermeasure. The final section gives the conclusions of our work.
2 Security Architecture of a Java-Based Smart Card
The Java Card platform is a multi-application environment, where the sensitive data of an applet shall be protected against malicious access from another applet or from the external world. To enforce protection between applets, classical Java technology uses type verification, the class loader and security managers. In the smart card world, complying with the traditional enforcement process is not possible: the type verification is performed outside the card due to memory constraints. The Java Card platform provides further security enhancements, such as transaction atomicity, cryptographic classes and the applet firewall. The applet firewall replaces the class loader and security manager to enforce the sandbox security model. Java Card security is ensured both inside and outside the card because of the limited resources of the platform. During the conversion of the Java class files, the semantics of the program is checked and signed outside the card (Figure 1).

For security reasons, the ability to download code into the card is controlled by a protocol
defined by GlobalPlatform (2011). This protocol ensures that the owner of the code has the
necessary authorisation to perform the action.
2.1 The Byte Code Verifier
Allowing code to be loaded into the card after post-issuance raises the same issues as web applets. An applet not built by a compiler (hand-made byte code) or modified after the compilation step may break the Java sandbox model. Thus, the client must check that the Java-language typing rules are preserved at the byte code level. Java is a strongly typed language where each variable and expression has a type determined at compile time, so that if a type mismatch arises in the source code, an error is thrown. The Java byte code is also strongly typed. Moreover, local and stack variables of the virtual machine have fixed types even within the scope of a method execution. Type mismatches are not detected at run time, which malicious applets can exploit. For example, pointers are not supported by the Java programming language although they are extensively used by the Java Virtual Machine (JVM), where object references from the source code are relative to a pointer. The absence of pointers thus reduces the number of programming errors, but it does not stop attackers from attempting to break security protections with unfair uses of pointers.

The Byte Code Verifier (BCV) is an essential security component in the Java sandbox model: any bug created by an ill-typed applet could induce a security flaw. Byte code verification is a complex process involving an elaborate program analysis, using an algorithm that is very costly in terms of time consumption and memory usage. For these reasons, many cards do not implement this kind of component, and it then falls to the organisation which signs the applet to ensure that its code is well-typed.
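The kind of checking the BCV performs can be illustrated with a toy stack machine: verification simulates a stack of *types* rather than values, rejecting any instruction whose operands do not match. The mini instruction set below is invented for illustration; a real verifier must also handle branches, local variables and subroutines:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ToyVerifier {
    // Each instruction pops `pops` operands of type `in` and pushes `out`.
    // 's' = short, 'a' = object reference.  Invented mini instruction set.
    record Insn(String name, int pops, char in, Character out) {}

    static final Insn SCONST = new Insn("sconst", 0, ' ', 's'); // push a short
    static final Insn SADD   = new Insn("sadd",   2, 's', 's'); // short + short
    static final Insn ACONST = new Insn("aconst", 0, ' ', 'a'); // push a reference

    // Simulate the type stack over the whole instruction sequence.
    static boolean verify(Insn[] code) {
        Deque<Character> stack = new ArrayDeque<>();
        for (Insn i : code) {
            for (int k = 0; k < i.pops(); k++) {
                Character top = stack.poll();
                if (top == null || top != i.in()) return false; // type mismatch
            }
            if (i.out() != null) stack.push(i.out());
        }
        return true;
    }

    public static void main(String[] args) {
        // Well-typed: adds two shorts.
        System.out.println(verify(new Insn[] { SCONST, SCONST, SADD })); // true
        // Ill-typed: adds a reference to a short -- rejected statically.
        System.out.println(verify(new Insn[] { SCONST, ACONST, SADD })); // false
    }
}
```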
2.2 The Java Card Firewall
The separation of different applets is enforced by a firewall which is based on the package structure of Java Card and the notion of contexts. When an applet is created, the Java Card Runtime Environment (JCRE) uses a unique Applet IDentifier (AID) to link it with the package where it has been defined. If two applets are instances of classes from the same Java Card package, they are considered to be in the same context. There is also a super-user context, called the JCRE context. Applets associated with this context can access objects from any other context on the card.
Each object is assigned a unique owner context, namely the context of the applet that created it. An object’s method is executed in the context of the instance, and this context determines whether access to another object will be allowed. The firewall prevents a method executing in one context from accessing any attribute or method of objects belonging to another context.
There are two ways through the firewall: JCRE entry points and shareable objects. JCRE entry points are objects owned by the JCRE that are specifically designated as accessible from any context. A significant example is the APDU buffer, which contains the commands sent to and received from the card. This object is managed by the JCRE and, in order to allow applets to access it, it is designated as an entry point. Another example is the elements of the table containing the AIDs of the installed applets. Entry points can be marked as temporary; references to temporary entry points cannot be stored in objects, and this rule is enforced by the firewall.
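As an illustrative model (not the JCRE implementation), the firewall's access decision can be written down directly: access is granted when the caller runs in the JCRE context, when it owns the object, or when the object is an entry point:

```java
public class FirewallModel {
    static final String JCRE = "JCRE";

    // An object carries its owner context and an entry-point flag.
    record Obj(String ownerContext, boolean entryPoint) {}

    // Simplified firewall decision for an access from `callerContext`.
    static boolean mayAccess(String callerContext, Obj o) {
        return callerContext.equals(JCRE)             // super-user context
            || callerContext.equals(o.ownerContext()) // same package context
            || o.entryPoint();                        // JCRE entry point (e.g. APDU buffer)
    }

    public static void main(String[] args) {
        Obj secret = new Obj("walletPkg", false);
        Obj apduBuffer = new Obj(JCRE, true);
        System.out.println(mayAccess("walletPkg", secret));      // true: owner
        System.out.println(mayAccess("loyaltyPkg", secret));     // false: firewall blocks
        System.out.println(mayAccess("loyaltyPkg", apduBuffer)); // true: entry point
    }
}
```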
2.3 Execution Consistency
By nature, a smart card operates in a hostile environment. Because its power, clock and reset are provided by the external world, the card must be protected against any modification of these parameters. Software processes often rely on internal data consistency, and can behave erratically in case of power disruption.
Java Card introduces a transaction mechanism that guarantees atomicity. It makes sure that all the operations within a transaction are completed. At the end of each transaction, a commit command confirms the completion of the previous operations. If the transaction is aborted by the program or by a power shortage, the mechanism ensures that all the operations performed within the transaction are rolled back to their previous state. In this way, it is possible to maintain the internal consistency of the related data.
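The mechanism can be sketched as a shadow copy of persistent state: writes inside a transaction go to a working copy, commit installs it atomically, and an abort (for example, after a power loss) discards it. This is an illustrative model only; the real Java Card API exposes this through `JCSystem.beginTransaction`, `commitTransaction` and `abortTransaction`:

```java
public class TransactionModel {
    private byte[] committed = new byte[4]; // "persistent" state
    private byte[] working;                 // in-flight transaction copy

    void begin()  { working = committed.clone(); }
    void write(int i, byte v) { (working != null ? working : committed)[i] = v; }
    void commit() { committed = working; working = null; }
    void abort()  { working = null; }       // e.g. power loss: discard updates

    public static void main(String[] args) {
        TransactionModel t = new TransactionModel();
        t.begin();
        t.write(0, (byte) 42);
        t.abort();                          // tear before commit
        System.out.println(t.committed[0]); // 0: the update was rolled back
        t.begin();
        t.write(0, (byte) 42);
        t.commit();
        System.out.println(t.committed[0]); // 42: atomically installed
    }
}
```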
2.4 The Sharing Mechanism
To support cooperative applications on one card, Java Card technology provides well-defined sharing mechanisms. The Shareable Interface Object (SIO) mechanism is the system in the Java Card platform meant for collaboration between applets. The javacard.framework package provides a tagging interface called Shareable, and the methods described in a Shareable interface are accessible through the firewall. Any server applet which provides services to other applets within the Java Card should define the exported services in an interface tagged as Shareable.
2.5 The CAP File
The CAP (Converted APplet) file format is based on the notion of components that contain specific information from the Java Card package. It is specified by Oracle (2011) and consists of eleven standard components: Header, Directory, Import, Applet, Class, Method, Static Field, Export, Constant Pool, Reference Location and Descriptor. The Debug component is used only for the debug process. Moreover, the targeted JCVM may support user custom components.
2.6 Synthesis
Smart card security is a complex problem with different perspectives, however, the products based on JCVM have passed the real-world security evaluations successfully for major industries around the world. Java Card is also a platform that has cleared high level security evaluations for issuance by banking associations and by leading government authorities. It has also achieved compliance with FIPS 140-1 certification scheme. Still, implementations have undergone several attacks, particularly perturbation attacks.
3 Perturbation Attacks on Smart Cards
In general, a fault is an event that changes the behaviour of a system such that the system no longer provides the expected service. It may be not only an internal event in the system, but also a change in the environment that causes a bit flip in the memory. However, the fault is the primary reason for the changes in the system that lead to an error, which in turn causes a failure of the complete system. In order to avoid such a failure, faults have to be detected as early as possible and some actions must be carried out to correct or stop the service. Thus, it is necessary to analyse the errors generated by these faults more precisely.
3.1 Fault Attacks
A smart card is a portable device for which a smart card reader provides external power and clock sources. The reader can be replaced with specific equipment to perform attacks. With short variations in the power supply, it is possible to induce errors in the smart card’s internal operations. These perturbations are called spike attacks, and they may induce errors in program execution: they aim at confusing the program counter and can cause the improper working of conditional checks, a decrease in loop counters and the execution of arbitrary instructions. A reader like the Micropross MP300 can be used to perform a glitch attack. As described by Anderson & Kuhn (1997), Boneh et al. (1997) and Joye et al. (1997), a glitch incorporates short deviations, beyond the required tolerance, from the standard signal bounds. It can be defined by a range of different parameters and can be used to inject memory faults as well as faulty execution behaviour. Hence, the possible effects are the same as in spike attacks.
The idea of injecting physical faults to shift the semantics of an application has emerged recently. Based on FI, an attacker can modify a part of the memory contents or a signal on an internal bus during an applet’s execution, which can lead to an exploitable deviant behaviour. The application thus mutates and executes a malicious byte code that can break the security model. Fault attacks have been used to attack cryptographic algorithm implementations, as presented by Aumüller et al. (2002), Hemme (2004), and Piret & Quisquater (2003).
Barbu et al. (2010) proposed a way to bypass the embedded smart card BCV. To accomplish that, a correct applet was installed which contains an unauthorised cast between two different objects. Statically, this applet complies with the Java Card security rules. If a laser beam hits the bus in such a way that the cast type check instruction is not executed, this applet becomes malware. Moreover, the authors were able to load applications into the targeted Java Card. The authors implemented the three Java classes defined in Listing 1. The first one is the class \texttt{A}, which contains 255 fields of type byte. The second one is the class \texttt{B}, which has a field of type short integer, and the last one is the class \texttt{C}, which refers to an instance of \texttt{A}.
Listing 1: Classes used to create a type confusion.
```java
public class A {
byte b00, ..., bFF;
}
public class B {
short addr;
}
public class C {
A a;
}
```
A \texttt{checkcast} verification is done (line 9) for the applet whose code is shown in Listing 2. This applet becomes malware when a laser beam hits the bus in such a way that the \texttt{checkcast} instruction is temporarily skipped. With this invalid cast, the authors succeeded in obtaining a window which allows access to the contents of the smart card memories.
Listing 2: \texttt{CheckCastApplet} class.
```java
public class CheckCastApplet extends Applet {
B b; C c;
... // Constructor, install method, ...
public void process(APDU apdu) {
... switch (buffer[ISO7816.OFFSET_INS]) {
case INS_ILLEGAL_CAST:
try {
c = (C) ( (Object) b ); // Checkcast check
} catch (ClassCastException e) {
// Invalid cast is detected
} // more later defined instructions
}
}
```
An approach to disturb the Control Flow Graph (CFG) of an applet by injecting a laser beam into the non-volatile memory of a smart card was proposed by Bouffard et al. (2011). This attack was performed on a \texttt{for} loop as described in Listing 3; the byte code version of this loop is presented in Listing 4. The attack can also be extended to other types of loops or conditions.
Listing 3: A for loop sample.
```java
for (short i = 0; i < n; ++i)
    foo = (byte) 0xBA;
bar = foo;
foo = bar;
// Few instructions are hidden here
// for a better understanding
bar = foo;
foo = bar;
```
Listing 4: Associated byte codes of the loop listed in the Listing 3.
```
sconst_0
sstore_1
sload_1
sconst_1
if_scmpge_w 00 7C
aload_0
bspush BA
putfield_b 0
aload_0
getfield_b_this 0
putfield_b 1
// Few instructions
// are hidden here
// for a better
// understanding
aload_0
getfield_b_this 1
putfield_b 0
sinc 1 1
goto_w FF17
```
The Java Card specification defines two instructions to branch back in a loop: `goto` and `goto_w`. The first one branches with a 1-byte offset and the second one takes a 2-byte offset. Since the smart card’s memory manager stores array data after the method byte code, a laser fault on the high part of the `goto_w` parameter can turn the backward jump into a forward one, and the authors succeeded in executing the contents of an array. However, knowledge of Java Card internal references is needed to execute a rich shellcode. Hamadouche et al. (2012) described a way to obtain the addresses of the Java Card API embedded in the card. With this attack, it is possible to know the internal references of the Java Card.
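The effect of the fault on the `goto_w` parameter is easy to check numerically: the offset is a signed 16-bit value, so resetting its high byte to `0x00` (the byte-reset fault of the next section) turns the backward branch of Listing 4 into a short forward one:

```java
public class GotoFault {
    // Interpret two bytes as the signed 16-bit goto_w offset.
    static short offset(int hi, int lo) { return (short) ((hi << 8) | lo); }

    public static void main(String[] args) {
        short original = offset(0xFF, 0x17); // the listing's "goto_w FF17"
        short faulted  = offset(0x00, 0x17); // laser resets the high byte
        System.out.println(original);        // -233: backward jump (the loop)
        System.out.println(faulted);         // 23: forward jump past the branch
    }
}
```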
Lancia (2012) exploited the Java Card instance allocator of the JCRE using high-precision FI. Each instance created by the JCRE is allocated in persistent memory. The Java Card specification Oracle (2011) provides some functions to create transient objects. The data of a transient object are stored in RAM, but the header of this object is always stored in persistent memory. On modern Java Cards using a Memory Management Unit (MMU), references are represented by an indirect memory address. This address is an index into a memory address pool which in turn refers to a global instance pool managed by the virtual machine (Figure 2).
3.2 Fault Models
As shown by Bouffard et al. (2011), it is possible to inject a laser beam into the memory cells, since the silicon layer of the smart card chip is exposed. These memory cells are found to be sensitive to light. Due to the photoelectric effect, modern lasers can be focused on relatively small regions of a chip and dynamically modify the execution flow, as explained by Barbu (2012).
It is necessary to know the effects of a fault attack on smart cards in order to detect it. Fault models have already been discussed in detail by Blömer et al. (2003) and Wagner (2004).
Using the precise bit error model, an attack was described by Skorobogatov & Anderson (2002). But it is not realistic on current smart cards, since modern components implement hardware security mechanisms like error detection and correction codes or memory encryption. During program execution, an attacker physically injects energy into a memory cell to change its state. Thus, depending on the underlying technology, the memory physically takes the value $0x00$ or $0xFF$. If memories are encrypted, the physical value becomes a random value (more precisely, a value which depends on the data, the address and an encryption key). To be as close as possible to reality, we chose the most realistic fault model, the precise byte error. So an attacker:
- can make FI at a precise clock cycle (can target any operation he wants),
- can only set or reset a byte to $0x00$ or to $0xFF$, depending on the underlying technology (bit set or reset fault type), or can change this byte to a random value beyond his control (random fault type),
- can target any of the memory cell he wants (can target a specific variable or register).
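The fault types above can be sketched with a hypothetical helper (not part of any card API); the `noise` parameter models the attacker-uncontrolled value obtained when the memory is encrypted:

```c
#include <stdint.h>

/* Illustrative sketch of the precise byte error model: a laser
   pulse forces one byte of memory to 0x00 or 0xFF depending on
   the technology; on an encrypted memory the observed result is
   an attacker-uncontrolled value (modelled here as XOR noise). */
typedef enum { FAULT_RESET, FAULT_SET, FAULT_RANDOM } fault_type;

uint8_t inject_fault(uint8_t cell, fault_type t, uint8_t noise) {
    switch (t) {
    case FAULT_RESET:  return 0x00;         /* bit reset technology */
    case FAULT_SET:    return 0xFF;         /* bit set technology   */
    case FAULT_RANDOM: return cell ^ noise; /* encrypted memory     */
    }
    return cell;
}
```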
Nowadays, the Information Technology Security Evaluation Facilities (ITSEF) use low-power laser diodes to illuminate the smart card. This technology drastically reduces the charging period of the laser. Under this hypothesis, an attacker can attack a program and a given countermeasure at the same time, which makes traditional applicative countermeasures ineffective.
3.3 Effects of the Fault Attacks on the Program Execution
In this work, only a single fault is considered; our proposed mechanism nevertheless supports dual faults, since it is protected by a checksum method. An attacker can break the confidentiality and/or the integrity mechanisms incorporated in the card. The code integrity of the program ensures that the originally installed code is the same as the one executed by the card. The data of a program are also a sensitive asset to be protected. With a single fault, an attacker can permanently or temporarily modify a sensitive piece of information. In particular, he can affect the variables used in any evaluation instruction, e.g. to never start a loop or to ignore an initialisation. The smart card should also ensure the confidentiality of its assets: the attacker may modify the data to be copied from the application byte array to the I/O smart card buffer by modifying the address of the buffer.
As seen, one of the effects of a fault is to modify the value of a register. The JVM registers are highly sensitive; for example, the Java Program Counter (JPC) can be altered by a fault. The fetch sequence of the byte code to be interpreted is shown in Listing 5. In this interpreter loop, the address of the function corresponding to the byte code to be interpreted is stored in the array bytecode_table; the index is obtained through vm_pc, which points into the currently executed method.
Listing 5: Fetch of the next instruction.
```c
handler = bytecode_table[vm_pc];
vm_pc++; // jpc is updated
bc_action = handler();
```
This JCVM was compiled for an ARM7 target and a code fragment is given in Listing 6. At line 1809, the local variable that stores vm_pc is loaded into r3, which corresponds to the 32-bit instruction 012083E2, i.e. the bytes E2 83 20 01 in the endianness used in the listing. Thus, if a laser hits and nullifies the third byte, the instruction becomes E2 83 00 01, which corresponds to add r0, r3, #1, storing the new value of the vm_pc variable into r0. But the real store is at line 1811, and it stores the content of the r2 register, which holds the value stored at line 1802.
Listing 6: Fetch at the binary level.
```assembly
_handlers (jvm.c:558) handler= bytecode_table[vm_pc];
*
loc 1 558 0 ; j_vm.c:558
* handler= bytecode_table[vm_pc];
003093E5 LDR r3, .L104+24
012083E2 ADD r2, r3, #1
*
loc 1 559 0 ; j_vm.c:559
* vm_pc++;
04319FE5 LDR r3, .L104+24
003083E5 LDR r3, [r3, #0]
012083E2 ADD r2, r3, #1
*
loc 1 560 0 ; j_vm.c:560
* bc_action = handler();
002083E5 LDR r3, .L104+24
012083E2 ADD r2, r3, #1
003083E5 LDR r3, [r3, #0]
*
loc 1 561 0 ; j_vm.c:561
* bc_action = handler();
012083E2 ADD r2, r3, #1
003083E5 LDR r3, [r3, #0]
```
Within the code fragment shown in Listing 6, one can see that a single fault can make the interpreter jump to a byte code other than the expected next one. In particular, it can skip a given method invocation, ignore a loop condition, or jump to a specific statement. Likewise, by modifying the destination or source register, an attacker can modify the value returned by a function, which allows the execution of sensitive code without authorisation or avoids the initialisation of variables. He can also force a faulty conditional jump. If the destination of the jump corresponds to an operand instead of a byte code, he executes a different program, often called a mutant program in the literature.
Listing 7: Faulty fetch of the next instruction.
```c
handler = bytecode_table[vm_pc];
vm_pc = *vm_pc;
bc_action = handler();
```
Now, the binary code has new semantics after the fault occurs, as shown in Listing 7. As one can notice, `vm_pc` gets the value of the current byte code to which it points, which can lead to a jump anywhere, in particular into a static array stored just after the method.
Evaluating the effects of a fault on a binary program is practically impossible due to the combinatorial number of possibilities; an analysis is often dedicated to a given target and a small function. Up to now, only generic solutions have been applied, often at the applicative level. Checking the integrity of the code with hash functions is useless against transient faults, since the checked code is not the one executed by the system.
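The byte-nullification effect on the add instruction discussed with Listing 6 can be sketched numerically, assuming the standard ARM data-processing encoding in which the destination register Rd occupies bits 15-12 of the instruction word:

```c
#include <stdint.h>

/* The word 0xE2832001 encodes "add r2, r3, #1": the byte holding
   the Rd field (bits 15-8 include Rd's nibble) is 0x20. A laser
   that nullifies that byte yields 0xE2830001, i.e. "add r0, r3, #1",
   so the result lands in the wrong register. */
unsigned dest_reg(uint32_t insn) {
    return (insn >> 12) & 0xF;  /* Rd field, bits 15-12 */
}

uint32_t nullify_byte(uint32_t insn, int byte_index) {
    /* simulate forcing one byte of the fetched word to 0x00 */
    return insn & ~(0xFFu << (8 * byte_index));
}
```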
4 Fault Detection Mechanisms
The fault detection mechanism can be classified into three countermeasure approaches: static, dynamic and mixed.
4.1 Static Countermeasure Approach
Static countermeasures ensure that each test is performed correctly and/or that the program CFG remains as described by the developer. They are implemented at the applicative layer. The main advantage is that the developer has knowledge of the assets to be protected; knowledge of fault attacks is, however, also required to implement the security features. Two examples of applicative countermeasures are explained below.
A redundant if-then-else statement can be used to secure a branching statement, verifying that a sensitive if condition is evaluated correctly. For example, in order to verify a PIN code, a call to `pinIsValidated()` should be performed, which returns `true` if the PIN code has been verified previously; `pinIsValidated()` is provided by the PIN Java interface. If the PIN code appears validated, the program checks the condition again before executing the operation. If the condition holds without an adequate call to `verifyPIN()`, it means that some external phenomenon has modified the state of the PIN object during the transfer of data on the bus.
Indeed, if a fault is injected during an if condition, an attacker can execute a specific statement without the check. In practice, a 2\textsuperscript{nd} order FI with a short delay between the two injections is difficult. A 2\textsuperscript{nd} order if statement can therefore be used to verify the requirements needed to access a critical operation, in order to prevent a faulty execution of an if-then-else statement. An example of this kind of implementation is given in Listing 8. The problem with a secure if condition is that the CFG of the program is not guaranteed.

Security Automaton to Mitigate Laser-based Fault Attacks on Smart Cards

Listing 8: Protected if statement.
```java
// pinIsValidated() returns a boolean
if (pinIsValidated()) {
    if (pinIsValidated()) {
        // Critical operation
    } else {
        // Attack detected!
    }
} else {
    if (!pinIsValidated()) {
        // Access not allowed
    } else {
        // Attack detected!
    }
}
```
The second applicative countermeasure is a step counter approach. The developer can implement this method as described in Listing 9 to make sure that the control flow has been respected and that the program execution is correct. Several check points can be inserted, and each node of the CFG defined by the developer is verified at runtime. If a step counter holds a wrong value at execution time, a faulty behaviour is detected. In Listing 9, a variable `step_counter` is initialised and incremented until it reaches a sensitive node. At that point, its value is compared with the expected value; a mismatch reveals unexpected behaviour, and a security action must be taken.
Listing 9: Step counter.
```java
short step_counter = 4;
if (step_counter == 4) {
    // Critical operation 1
    step_counter++;
} else {
    // Attack detected!
}
// ...
if (step_counter == 5) {
    // Critical operation 2
    step_counter++;
} else {
    // Attack detected!
}
```
4.2 System-Based or Dynamic Countermeasure Approach
In the applicative countermeasure approaches, the developer himself is in charge of securing his code. Another approach is a system-based countermeasure, in which the security mechanisms are provided by the system itself. Most of these countermeasures need an automatic off-card static analysis of the applet in order to reduce the run-time cost.
To prevent the modification of the dynamic elements (stack, data, etc.), and to ensure their integrity, smart cards can implement countermeasures on the stack and on data. A checksum can be used to verify the manipulated value for each operation. Another low-cost countermeasure to protect stack elements against FI attacks was explained by Dubreuil et al. (2013). Their countermeasure implements the principle of a dual stack, where each value is pushed from the bottom and grows up in the stack area, while each reference is pushed from the top and grows down. This countermeasure protects the smart card against type confusion attacks.

Bouffard, N Thampi and Lanet
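The dual-stack principle can be sketched in C as follows; the structure, names and sizes are illustrative and do not reflect the actual implementation of Dubreuil et al. (2013):

```c
#include <stdint.h>

/* Dual-stack sketch: values grow up from the bottom of the stack
   area, references grow down from the top, so a type confusion
   between a value slot and a reference slot cannot address the
   same cell. Sizes and types are illustrative. */
#define STACK_SLOTS 16

typedef struct {
    uint16_t slot[STACK_SLOTS];
    int val_top;  /* next free slot for values (grows up)       */
    int ref_top;  /* next free slot for references (grows down) */
} dual_stack;

void ds_init(dual_stack *s) {
    s->val_top = 0;
    s->ref_top = STACK_SLOTS - 1;
}

int ds_push_value(dual_stack *s, uint16_t v) {
    if (s->val_top > s->ref_top) return -1;  /* overflow */
    s->slot[s->val_top++] = v;
    return 0;
}

int ds_push_ref(dual_stack *s, uint16_t r) {
    if (s->ref_top < s->val_top) return -1;  /* overflow */
    s->slot[s->ref_top--] = r;
    return 0;
}
```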
As described before, a program's code is also an asset to be protected. The memory can be encrypted to ensure the confidentiality of the code. As a more affordable countermeasure, Barbu (2012) proposed a method to scramble the code. Unfortunately, a brute force attack can bypass a scrambled memory; Razafindralambo et al. (2012) improved this countermeasure with a randomised scrambling operation to protect the code confidentiality.
Enabling all the countermeasures during the complete program execution is too expensive for the card and is not always required. Hence, to reduce the cost of the countermeasures, Barbu et al. (2012) proposed user-enabled countermeasures, in which the developer chooses to enable a specific countermeasure for a particular code fragment.
Recently, Farissi et al. (2013) presented an approach based on artificial intelligence, in particular neural networks. This mechanism is included in the JCVM; after a learning step, it can dynamically detect abnormal behaviour of each program on the smart card.
4.3 Mixed Countermeasure Approach
Unlike the previous approaches, mixed methods rely on off-card operations, in which some computations are performed ahead of time to support embedded run-time checks. This keeps the on-card cost low, since the expensive operations are realised outside the card.
To ensure the code integrity, Prevost & Sachdeva (2006) patented a method in which a hash value is computed for each basic block of a program. The program is sent to the card together with the hash of each basic block. During execution, the smart card verifies this value for each executed basic block; if a hash is wrong, an abnormal behaviour is detected.
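This scheme can be sketched as follows; the XOR-rotate hash below is a stand-in for the (unspecified) patented function, and all names are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

/* Basic-block hashing sketch: each block's bytes are hashed
   off-card; at run time the interpreter recomputes the hash of
   the executed block and compares it with the stored value.
   The rotate-XOR hash is only a placeholder. */
uint16_t block_hash(const uint8_t *code, size_t len) {
    uint16_t h = 0;
    for (size_t i = 0; i < len; i++)
        h = (uint16_t)((h << 3) | (h >> 13)) ^ code[i];
    return h;
}

int block_is_intact(const uint8_t *code, size_t len, uint16_t expected) {
    return block_hash(code, len) == expected;
}
```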
Al Khary Séré (2010) described three countermeasures, based on a bit field, on basic blocks and on path checks, to protect the smart card against FI attacks. These countermeasures require off-card operations, performed during the compilation step, to compute the required information, which is provided to the smart card through a custom component. The smart card then checks the correctness of the current CFG dynamically. Since the expensive operations are done off-card, this countermeasure has a low footprint in the smart card's runtime environment.
In this section, we described some published countermeasures to prevent FI attacks. A summary of the assets protected by each countermeasure is shown in Table 1.
5 Security Automata and Reference Monitor
Detecting a deviant behaviour is a safety property, i.e. a property stating that "nothing bad happens". A safety property can be characterised by a set of disallowed finite executions, expressed with regular expressions. The authorised execution flow is a particular safety property: the static control flow must match exactly the runtime execution flow in the absence of attacks. To prevent such attacks, we define several partial traces of events as the only authorised behaviours. The key point is that this property can be encoded by a finite state automaton, the language recognised being the set of all authorised partial traces of events.

Table 1 Summary of the FI protection mechanisms.

<table>
<thead>
<tr>
<th rowspan="2">Countermeasures</th>
<th>Code Protection</th>
<th>Data Protection</th>
</tr>
<tr>
<th>Integrity</th>
<th>Confidentiality</th>
</tr>
</thead>
<tbody>
<tr>
<td>if statement</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Step counter</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Checksum</td>
<td></td>
<td>✓</td>
</tr>
<tr>
<td>Dubreuil et al. (2013)</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Barbu (2012)</td>
<td></td>
<td>✓</td>
</tr>
<tr>
<td>Farissi et al. (2013)</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Prevost & Sachdeva (2006)</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Al Khary Séré (2010)</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
5.1 Principle
Schneider (2000) defined a security automaton, based on the Büchi automaton, as a triple \((Q, q_0, \delta)\) where \(Q\) is a set of states, \(q_0\) is the initial state and \(\delta: Q \times S \rightarrow 2^Q\) is a transition function, \(S\) being the set of input symbols, i.e. the set of security-relevant actions. The security automaton processes a sequence of input symbols \(s_1, s_2, \ldots, s_n\), one input at a time. Starting from the initial state \(q_0\), as each \(s_i\) is read, the current set of states \(Q'\) changes to \(\cup_{q \in Q'} \delta(s_i, q)\). If the security automaton can perform a transition according to the action, the program is allowed to perform that action; otherwise the program is terminated. Such a mechanism can enforce a safety property, as is the case for checking the correctness of the execution flow.
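As an illustration, a deterministic instance of such a security automaton can be run over a trace of security-relevant actions; the three-state policy below ("open before use, close ends the session") is purely illustrative and not one of the paper's automata:

```c
#include <stddef.h>

/* Deterministic security-automaton sketch: states are small
   integers, symbols index the transition table, and -1 marks a
   forbidden transition. run_automaton() returns the final state,
   or -1 as soon as a disallowed action occurs (the monitor would
   then terminate the program). */
enum { SYM_OPEN, SYM_USE, SYM_CLOSE, NSYM };

static const int delta[3][NSYM] = {
    /* q0 */ {  1, -1, -1 },
    /* q1 */ { -1,  1,  2 },
    /* q2 */ { -1, -1, -1 },
};

int run_automaton(const int *trace, size_t n) {
    int q = 0;  /* initial state q0 */
    for (size_t i = 0; i < n; i++) {
        q = delta[q][trace[i]];
        if (q < 0) return -1;  /* policy violation: block */
    }
    return q;
}
```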
The property we want to implement here is a redundancy of the control flow. In a first approach, the automaton that verifies the control flow can be inferred using an inter-procedural CFG analysis. In such a way, the initial state \(q_0\) is represented by any method's entry point. \(S\) is made of all the byte codes that generate a modification of the control flow, along with an abstract instruction \(join\) representing any other instruction pointed to by a label. By definition, a basic block ends with a control flow instruction and starts either with the first instruction after a control flow instruction or with an instruction pointed to by a label. When interpreting a byte code, the state machine checks whether the transition generates an authorised partial trace; if not, it takes an appropriate countermeasure.
The transition functions are executed during byte code interpretation, which follows Schneider's isolation principle. In a JCVM, the security automaton remains under the control of the runtime and the program cannot interfere with the automaton transitions; thanks to the Java sandbox model, there is thus no possibility for an attacker to corrupt the automaton from the program itself. Of course, the attacker could corrupt the automaton with the same means he used to corrupt the execution flow, but by hypothesis we do not consider the possibility of a double FI. If needed, it is possible to protect the automaton with an integrity check before each access to the automaton.
5.2 Security Automaton Included in a JCVM
We present here, in Listing 10, a code fragment extracted by Girard et al. (2010) from the Internet protocol payment defined by Gemalto. It starts with an array initialisation loop, followed by a call to the method update() in order to initialise the PIN code and a call to register() to register the applet into the card.
Listing 10: Source code of the payment applet.
```java
protected ProtocolPayment(byte[] buffer, short offset, byte length) {
    A[0] = 0; // init. of array A
    for (byte j = 0; j < buffer[(byte) (offset + 12)]; j++)
        D[j] = 0; // init. of array D
    pin = new OwnerPIN((byte) TRY_LIMIT, (byte) MAX_PIN_SIZE);
    pin.update(myPin, (short) START_OFFSET, (byte) myPin.length); // initialisation of pin
    register(); // register this instance
}
```
The set $S$ is made of the elements of a language which expresses the control flow integrity policy, i.e. all the binary instructions controlling the program flow (ifeq, ifne, goto, invoke, return, …), plus the dummy instruction join. In this example, the number of loop iterations cannot be statically computed, but it can be represented by a regular expression. The CFG of this program is given in Figure 3.
Figure 3: CFG of the applet constructor.
The first block ends with a goto, the end of the second block precedes a join label, and the last one finishes with return. Inside the basic blocks, there are calls to other methods; the first one is the constructor of the super class. In the fourth block we have a call to the constructor of OwnerPIN, followed by the methods update and finally register.
Each invoked method has its own CFG and its own automaton. This automaton represents an abstraction of the program (its CFG) and is used by the monitor to control the execution. The automaton can be built statically off-card and loaded with the applet as an optional component of the CAP file, or it can be constructed by the card itself while loading the code. The code is always loaded in a safe environment and there should not be any attack during this phase. A simple integrity check will prevent the code or the automaton from being altered before being stored into the card; this point is discussed later.
The trace recognized for this method is: (goto, ifscmplt*, join, return); the automaton that recognizes this trace is shown in Figure 4. The condition of the loop is evaluated at least once. In fact, the trace can be made more precise: the calls to methods (and the use of their references) can be taken into account to check that the control flow has been correctly transferred to the called method. The recognized trace then becomes: (invokespecial 6, goto, ifscmplt*, join, invokespecial 5, invokevirtual 7, invokevirtual 8, return), as given in Figure 5.
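The trace above can be checked by a small deterministic automaton; the C sketch below (state numbering illustrative, ifscmplt denoting the short-comparison branch) accepts exactly the traces matching goto, then zero or more ifscmplt, then join, then return:

```c
#include <stddef.h>

/* Sketch of an automaton in the spirit of Figure 4, recognising
   the trace (goto, ifscmplt*, join, return). The state numbering
   is illustrative, not the paper's. */
enum { T_GOTO, T_IFSCMPLT, T_JOIN, T_RETURN, NTOK };

int trace_accepted(const int *t, size_t n) {
    static const int next[3][NTOK] = {
        /* q0: expect goto             */ {  1, -1, -1, -1 },
        /* q1: loop on ifscmplt / join */ { -1,  1,  2, -1 },
        /* q2: expect return           */ { -1, -1, -1,  3 },
    };
    int q = 0;
    for (size_t i = 0; i < n; i++) {
        q = next[q][t[i]];
        if (q < 0) return 0;   /* unauthorised partial trace */
    }
    return q == 3;             /* accepting state            */
}
```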
Such a state machine can easily be represented by an array (see Table 2), allowing the system to check whether the current state allows a change to the requested state for each
Table 2 Basic representation of the automata.
<table>
<thead>
<tr>
<th>Transition Function</th>
<th>$q_0$</th>
<th>$q_1$</th>
<th>$q_2$</th>
<th>$q_3$</th>
<th>$q_4$</th>
<th>$q_5$</th>
<th>$q_6$</th>
<th>$q_7$</th>
</tr>
</thead>
<tbody>
<tr>
<td>invokespecial 5</td>
<td>$q_0$</td>
<td>$q_1$</td>
<td>$q_2$</td>
<td>$q_3$</td>
<td>$q_4$</td>
<td>$q_5$</td>
<td></td>
<td></td>
</tr>
<tr>
<td>goto</td>
<td></td>
<td>$q_1$</td>
<td>$q_2$</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>join</td>
<td></td>
<td>$q_1$</td>
<td></td>
<td>$q_2$</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ifscmplt</td>
<td></td>
<td>$q_1$</td>
<td>$q_2$</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>invokespecial 6</td>
<td></td>
<td>$q_1$</td>
<td>$q_2$</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>invokevirtual 7</td>
<td></td>
<td>$q_1$</td>
<td>$q_2$</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>invokevirtual 8</td>
<td></td>
<td>$q_1$</td>
<td></td>
<td>$q_2$</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
transition function. Moreover, keeping track of the JPC allows a fine-grained control of the CFG. For example, if the JCVM encounters the instruction goto label, it checks whether such an instruction is allowed in the current state (say, $q_1$); if not, it takes an adequate countermeasure. If the instruction is allowed in the current state, the JCVM checks whether the destination is the expected one, i.e. $q_2$, by verifying the label of the instruction or the token of the invoked method. If the instruction is a return, it verifies that either it is the last instruction or the next instruction has a label.
5.3 The Reference Monitor
The control of the transition functions is quite straightforward. Once the automaton array has been built statically, either off-card or during the linking process, each Java frame is extended with the value of the current state $q_i$. In the case of a multithreaded virtual machine, each thread manages, in its own Java frames, the state of the security automaton of each current method. Knowing the current state and the current instruction, it is easy to check the source and the destination while executing an instruction related to control flow. Unfortunately, such a matrix is not compatible with a highly constrained device like the smart card; thus, we need a compact representation inside the card.
Listing 11: Transition function for the ifle byte code (next instruction).
```c
1 int16 BC_ifle(void) {
2 if (SM[frame->currentState][INS] != *vm_pc)
3 return ACTION_BLOCK;
4 vm_sp -= 2;
5 if (vm_sp[0].i <= 0) return BC_goto();
6 if (SM[frame->currentState][NEXT] != state(vm_pc))
7 return ACTION_BLOCK;
8 vm_pc += 2;
9 frame->currentState = SM[frame->currentState][NEXT];
10 return ACTION_NONE; }
```
The automaton is stored as an array with several columns: the next state, the destination state and the instruction that ends the basic block. In Listing 11, the test at line 2 verifies that the currently executed instruction is the one stored in the method area; according to the fault model, a transient fault could have been generated during the instruction decoding phase. If it does not match, the JCVM stops the execution (line 3). If the evaluation condition is true, the interpreter jumps to the destination (line 5). Else, it checks whether the next Java program pointer corresponds to a valid state for the current state of the automaton; if it is allowed, the automaton changes its state.
Listing 12: Transition function for the ifle byte code (target jump).
```c
int16 BC_goto(void) {
vm_pc = vm_pc - 1 + GET_PC16;
if (SM[frame->currentState][DEST] != state(vm_pc))
return ACTION_BLOCK;
frame->currentState = SM[frame->currentState][DEST];
return ACTION_NONE;
}
```
In Listing 12, the last part of the ifle byte code also checks that the destination JPC matches the next state and then updates the current state.
6 Metrics
The modification of the JCVM affects the interpreter, and potentially the linker-loader if one prefers to build an on-the-fly state machine instead of implementing it as an additional component of the CAP file. In this proof of concept, we implement it as an additional component. The overhead must be evaluated in terms of ROM, RAM and EEPROM memory. The RAM being the scarcest resource, the implementation must be optimised with this criterion. The Java frame has been modified by adding one byte to store the current value of the state, so the RAM overhead is one byte per method call. The second memory to be optimised is the EEPROM. It contains the matrix storing the automaton \( SM \) for each method; it is written once during the load and read until the applet is removed from the card. It is a two-dimensional array with a particular entry to manage instructions having multiple jumps, like \( \text{tableswitch} \), \( \text{lookupswitch} \), etc. We did not optimise this structure, in order to keep a direct access in \( O(1) \). For an already installed Java Card application (API, romised applets, etc.) this table is burned in the ROM area, which is less constrained. So the memory overhead is minimal for the RAM; for the EEPROM, it depends on the structure of the methods of the application uploaded in post-issuance.
The second metric concerns the execution time overhead. Each Java Card instruction requires two cycles: \( \text{prefetch} \) and \( \text{execute} \). The \( \text{prefetch} \) cost is fixed regardless of the automaton implementation; in our implementation of the Java Card on an ARM7, it costs 0.96\( \mu s \). The \( \text{execute} \) cycle costs, for ifscmplt, 0.615\( \mu s \). The modification of the interpreter increases the execution time by 0.332\( \mu s \): an instruction that needed 1.575\( \mu s \) now requires 1.907\( \mu s \), i.e. an overhead of 19%. But only the instructions that change the control flow are modified, \( i.e. \) 45 instructions out of the 184 of the Java Card instruction set. Of course, the overhead depends on the instructions used in the method: in the example of Listing 10, only 7 instructions out of 93 have an overhead.
7 Related Works
Aktug (2008) defined a formal language for security policy specifications, ConSpec, and proved statically that a monitor can be inlined into the program byte code by adding first-order logic annotations. They use a weakest-precondition computation that works in the same way as the annotation propagation algorithm used by Pavlova et al. (2004) to produce a fully annotated, verifiable program for the Java Card. This allows the use of Java Modeling Language (JML) verification tools to verify the actual policy adherence. Such a static approach cannot be adopted here due to the dynamic nature of the attack.
The only application of the security automaton to smart cards was presented by McDougall et al. (2004), where the concept of a policy automaton, combining defeasible logic with a state machine, was used. It represents complex policies as combinations of basic policies. A tool has been implemented for performing policy automaton analysis and checking policy conflicts, and a code generator implements the transition functions and creates a Java Card applet. It was mainly concerned with enforcing invariants in the application.
8 Conclusion
In this work we introduced and implemented a countermeasure to detect FI attacks on smart cards. We presented an automatic method to obtain control flow redundancy using a security automaton executed in kernel mode. The automaton is generated automatically during the linking process or by an off-card process, and is modeled by a regular expression which describes each instruction to be executed. We also presented the metrics of our Java Card implementation on an ARM7 processor, from which we conclude that the proposed method is cost effective and efficient.
This technique is not limited to CFG properties; it can be used for more general security policies expressed as safety properties. It is interesting to check whether some security commands have already been executed before performing a sensitive operation. Some are memorised in a secured container (e.g. the PIN code field isValidated), but some use unprotected fields and could be subject to FI attacks. The difficulty is to find the right trade-off between a highly secured system with poor run-time performance and an efficient system with less security.
References
|
olmocr_science_pdfs
|
2024-12-10
|
2024-12-10
|
19aa9ab0bff7406fc60d7875f742000db45094dd
|
[REMOVED]
|
{"Source-Url": "https://hal.univ-lorraine.fr/hal-02983256/file/main.pdf", "len_cl100k_base": 11375, "olmocr-version": "0.1.53", "pdf-total-pages": 22, "total-fallback-pages": 0, "total-input-tokens": 59915, "total-output-tokens": 16284, "length": "2e13", "weborganizer": {"__label__adult": 0.00046443939208984375, "__label__art_design": 0.0003075599670410156, "__label__crime_law": 0.0005779266357421875, "__label__education_jobs": 0.0004024505615234375, "__label__entertainment": 6.443262100219727e-05, "__label__fashion_beauty": 0.0001933574676513672, "__label__finance_business": 0.0002655982971191406, "__label__food_dining": 0.00041794776916503906, "__label__games": 0.0008454322814941406, "__label__hardware": 0.002162933349609375, "__label__health": 0.0007314682006835938, "__label__history": 0.00029468536376953125, "__label__home_hobbies": 0.00010317564010620116, "__label__industrial": 0.0006365776062011719, "__label__literature": 0.00023233890533447263, "__label__politics": 0.0003724098205566406, "__label__religion": 0.0006933212280273438, "__label__science_tech": 0.049346923828125, "__label__social_life": 6.902217864990234e-05, "__label__software": 0.005756378173828125, "__label__software_dev": 0.9345703125, "__label__sports_fitness": 0.0003991127014160156, "__label__transportation": 0.0007123947143554688, "__label__travel": 0.00022470951080322263}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 65167, 0.04017]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 65167, 0.19694]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 65167, 0.87702]], "google_gemma-3-12b-it_contains_pii": [[0, 1157, false], [1157, 3652, null], [3652, 7182, null], [7182, 10685, null], [10685, 13927, null], [13927, 17256, null], [17256, 20333, null], [20333, 22604, null], [22604, 25981, null], [25981, 29255, null], [29255, 32489, null], [32489, 34615, null], 
[34615, 38056, null], [38056, 41326, null], [41326, 42643, null], [42643, 45345, null], [45345, 48805, null], [48805, 52234, null], [52234, 55170, null], [55170, 58734, null], [58734, 62243, null], [62243, 65167, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1157, true], [1157, 3652, null], [3652, 7182, null], [7182, 10685, null], [10685, 13927, null], [13927, 17256, null], [17256, 20333, null], [20333, 22604, null], [22604, 25981, null], [25981, 29255, null], [29255, 32489, null], [32489, 34615, null], [34615, 38056, null], [38056, 41326, null], [41326, 42643, null], [42643, 45345, null], [45345, 48805, null], [48805, 52234, null], [52234, 55170, null], [55170, 58734, null], [58734, 62243, null], [62243, 65167, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 65167, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 65167, null]], "pdf_page_numbers": [[0, 1157, 1], [1157, 3652, 2], [3652, 7182, 3], [7182, 10685, 4], [10685, 13927, 5], [13927, 17256, 6], [17256, 20333, 7], [20333, 22604, 8], [22604, 25981, 9], [25981, 29255, 10], [29255, 32489, 11], [32489, 34615, 12], [34615, 38056, 13], [38056, 41326, 14], [41326, 42643, 15], [42643, 45345, 16], [45345, 48805, 17], [48805, 52234, 18], 
[52234, 55170, 19], [55170, 58734, 20], [58734, 62243, 21], [62243, 65167, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 65167, 0.1063]]}
|
olmocr_science_pdfs
|
2024-12-06
|
2024-12-06
|
52c80c6ceb8480c5f38814eca045cd2d9351c4af
|
Copyright SANS Institute
Author Retains Full Rights
Automated Security Testing of Oracle Forms Applications
GIAC (GWAPT) Gold Certification
Author: Bálint Varga-Perke, vpbalint@silentsignal.hu
Advisor: Rick Wanner
Accepted: May 15, 2015
Abstract
Oracle Forms, a component of Oracle Fusion Middleware, is a technology for efficiently building browser-based enterprise applications. To support multiple transport methods, Forms has its own binary message format that is meant to provide serialization and additional security for the platform. Unfortunately, this proprietary format renders conventional security testing tools unusable. Reverse engineering methods will be employed to reveal the format of the protocol messages and to analyze the cryptographic protections in use. It will be shown that the proprietary encryption and key exchange schemes can be attacked in multiple ways. New tools will be presented which can be used to exploit these weaknesses and allow existing security testing software to be used against Oracle Forms applications. Based on these observations, deployment best practices will also be described to help mitigate the discussed problems.
1. Introduction
To keep up with the increasing rate of web application attacks (Imperva, 2014), a wide variety of automated security testing tools have been developed (OWASP, 2014). These tools rely on interface and protocol standards, so they can be used against various applications.
But what if some of these well-known interfaces are replaced with proprietary alternatives? How does such a change affect the work of a penetration tester and the opportunities of an attacker? This paper explores these questions through the example of Oracle Forms.
Oracle Forms, a component of Oracle Fusion Middleware, is a technology to rapidly develop browser-based enterprise applications (Oracle Corporation). The framework is implemented in Java, and its client-side components run as applets inside the browser. Instead of using standard HTTP requests, Oracle Forms implements a proprietary binary protocol that can be used over raw TCP channels or embedded in HTTP. The protocol is encrypted, which renders conventional testing tools useless, since they are unable to read or modify messages in plain text.
Oracle Forms is based on the Java applet technology to display client-side content. Applets used to provide attractive features long before these were incorporated into web browsers (Schuh, 2013). Today the majority of these once-unique features have become part of web browsers (Microsoft Corporation, 2009). The popularity of applets is also declining because of security concerns (Mimoso, 2013). Despite these trends, Oracle is dedicated to continuing support for Forms (Oracle Corporation, 2012). Moreover, migration to the more modern Application Development Framework (Oracle Corporation, 2014) is not supported by the vendor (Oracle Corporation, 2012).
To support the testing of Oracle Forms applications, this paper gives a detailed description of Oracle's proprietary protocol. Multiple vulnerabilities of the encryption scheme are shown which can be exploited by passive and active network attackers. Based on the analysis, the possibilities and challenges of automated testing are assessed. Proof-of-concept software is presented that demonstrates the attacks and can be used directly or as a base for more advanced tools for the security testing of Oracle Forms applications.
At the time of writing the latest version of Oracle Forms is 11g; all findings are based on this version. The source code of the presented tools is available at http://github.com/v-p-b/oracle_forms/.
2. Security Analysis of Oracle Forms
2.1. The Oracle Forms Protocol
To assess the security of a web application framework (and the applications built on it), a detailed understanding of its communication protocol is required. Oracle Forms is written in Java, so the byte-code of its components can be easily decompiled. This way a “white box” approach (Girish Janardhanudu, 2005) can be taken for protocol analysis.
The main source of information was the decompiled byte-code of the frmall.jar archive, which contains the framework code that is loaded client-side when using Oracle Forms. The code snippets in the following subsections are taken from this archive. The decompilation was done using the JAD decompiler (Java Decompilers Online, 2015) via the following command (Linux Bash):
```
unzip frmall.jar; find oracle/ -name '*.class' -execdir jad {} \;
```
In addition to reviewing the decompiled pseudo-code, the encrypted communication was intercepted and analyzed with Burp Suite Professional (PortSwigger Ltd.).
2.1.1. Encryption and Key Exchange
Since Oracle Forms supports plain text communication channels, the framework provides additional encryption in the application layer (Oracle Corporation, 2009). The documentation provided by the vendor doesn’t specify the level of protection this encryption layer is meant to provide; the FAQ only states that the encryption scheme is “not as strong as the SSL standard” (Oracle Corporation, 2009). An older version of the documentation states that 40-bit RC4 encryption is used when communicating over a plain text channel (Oracle Corporation, 2009). RC4 is a symmetric stream cipher designed by Ron Rivest in 1987 (Ron L. Rivest, 2014) that later became part of important protocols such as the SSL and TLS protocol families (IETF, 2008). The cipher was subsequently found to be vulnerable to multiple cryptographic attacks; an overview of these attacks is presented in section 2.2.1.
Aside from the cryptographic strength of the employed cipher, the security of a protocol is also dependent on how the algorithm is used. The oracle.forms.net package contains the classes related to network communication. Communication is supported over raw TCP sockets or wrapped inside HTTP or HTTPS protocols.
Connection classes (HTTPConnection, SocketConnection) are responsible for establishing communication channels between the Oracle Forms client and server. This task includes cryptographic negotiations and proxy support. Key exchange is performed by these classes in the following way (taken from the HTTPConnection class):
dataoutputstream.writeInt(NEG_SEND); // NEG_SEND = 0x47446179
int i;
dataoutputstream.writeInt(i = (new Random()).nextInt());
dataoutputstream.flush();
int k = datainputstream.readInt();
int j = datainputstream.readInt();
if(k == NEG_RESPONSE) // NEG_RESPONSE = 0x4d617465
{
    byte abyte0[] = new byte[5];
    abyte0[0] = (byte)(i >> 8);
    abyte0[1] = (byte)(j >> 4);
    abyte0[2] = -82;
    abyte0[3] = (byte)(i >> 16);
    abyte0[4] = (byte)(j >> 12);
    if(mUseNativeHTTP)
        mHNs.setEncryptKey(abyte0);
    else
        mHs.setEncryptKey(abyte0);
}
The client first sends the NEG_SEND constant (equivalent to the ASCII “GDay” string) to the server along with a random integer. Then it waits for the server to send the NEG_RESPONSE constant (equivalent to the ASCII “Mate” string) and another integer.
The two integers are then used to construct the 5 byte (40-bit) long *abyte* byte array that is passed to the *setEncryptKey()* methods of two *Stream* objects.
*Stream* objects are either instances of the *EncryptedInputStream* and *EncryptedOutputStream* classes or wrappers around these classes (in case of *HTTPConnection*). These classes perform the actual encryption and decryption of the data streams. Review of the pseudo-code confirms that the cipher in use is indeed RC4 and that the key of the cipher is the one passed to the *setEncryptKey()* methods (the relevant pseudo-code is included in the Appendix). This observation confirms that although there are 64 bits of key material exchanged on protocol initiation and the key length of RC4 is 40 bits, the effective key length of the encryption is only 32 bits since the third byte of the key is always -82 (0xAE).
After the key is set, encryption and decryption are performed byte-by-byte on the data streams; no message authentication or integrity checking takes place. The structure of the underlying data is determined by the proprietary message format of Oracle Forms.
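The key derivation shown in the decompiled snippet above is easy to reproduce outside the framework. The following sketch rebuilds the 5-byte key from the two exchanged integers; the class name and the example integer values are illustrative, not taken from the paper. Note how the constant third byte (-82, i.e. 0xAE) means only 32 bits of the key actually vary.

```java
// Rebuilds the 40-bit RC4 key from the two integers exchanged during
// protocol negotiation, mirroring the decompiled HTTPConnection code.
// Because key[2] is constant, the effective key space is only 32 bits.
public class KeyReconstruction {
    static byte[] deriveKey(int i, int j) {
        byte[] key = new byte[5];
        key[0] = (byte) (i >> 8);
        key[1] = (byte) (j >> 4);
        key[2] = (byte) -82;        // always 0xAE
        key[3] = (byte) (i >> 16);
        key[4] = (byte) (j >> 12);
        return key;
    }

    public static void main(String[] args) {
        // Illustrative values standing in for the intercepted integers.
        byte[] k = deriveKey(0x11223344, 0x55667788);
        for (byte b : k) System.out.printf("%02x ", b);
        System.out.println();
    }
}
```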
### 2.1.2. Message Format
Source code review revealed that Oracle Forms implements a custom serialization format to transmit different Java objects over the network. The serialization is implemented in the *Message* class of the *oracle.java.forms.engine* package. A *Message* represents a series of *Properties* representing Java basic types and objects. *Messages* can be nested: a *Message* object can hold other *Message* objects as its *Properties*.
*Messages* start with a variable-sized header that identifies the type of the *Message*. *Message* types can describe standard CRUD (Create, Read, Update, Delete) functionality, and there are also 4 protocol-specific message types: two types to indicate “delta” messages and two types to indicate “client” messages. These message types aren’t further investigated as they don’t affect the data representation format of the protocol.
After the header the *Properties* of the *Message* are serialized sequentially. A serialized property starts with a 2 or 3 byte prefix (header) that indicates its type. After the prefix the determining parameters of the objects follow as an unaligned byte stream. The serialization formats of the serializable types are summarized in the following table:
Table 1. Object serialization formats
<table>
<thead>
<tr>
<th>Type</th>
<th>Property Type Header</th>
<th>Representation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boolean (true)</td>
<td>0x5000</td>
<td>N/A</td>
</tr>
<tr>
<td>Boolean (false)</td>
<td>0x6000</td>
<td>N/A</td>
</tr>
<tr>
<td>Integer (0)</td>
<td>0x1000</td>
<td>N/A</td>
</tr>
<tr>
<td>Integer (1-255)</td>
<td>0x2000</td>
<td>Integer value as 1 byte</td>
</tr>
<tr>
<td>Integer (256-65535)</td>
<td>0x3000</td>
<td>Integer value as 2 bytes</td>
</tr>
<tr>
<td>Integer (other)</td>
<td>0x0000</td>
<td>Value as 4 bytes</td>
</tr>
<tr>
<td>String</td>
<td>0x4000</td>
<td>1 byte identifier (see description below)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Length: 2 bytes</td>
</tr>
<tr>
<td></td>
<td></td>
<td>UTF-8 string buffer</td>
</tr>
<tr>
<td>String reference</td>
<td>0x9000</td>
<td>1 byte identifier</td>
</tr>
<tr>
<td></td>
<td></td>
<td>1 byte new identifier (see description below)</td>
</tr>
<tr>
<td>Byte</td>
<td>0x7000</td>
<td>Byte value</td>
</tr>
<tr>
<td>null</td>
<td>0x8000</td>
<td>N/A</td>
</tr>
<tr>
<td>String[]</td>
<td>0xE00002</td>
<td>Array length: 1 byte</td>
</tr>
<tr>
<td></td>
<td></td>
<td>UTF-8 Strings with 2 byte length prefixes</td>
</tr>
<tr>
<td>Float</td>
<td>0xE00005</td>
<td>Float value</td>
</tr>
<tr>
<td>Date</td>
<td>0xE00006</td>
<td>Timestamp represented on 8 bytes</td>
</tr>
<tr>
<td>byte[]</td>
<td>0xE00007</td>
<td>Array length: 1 byte</td>
</tr>
<tr>
<td></td>
<td>0xE0000F</td>
<td>Array elements</td>
</tr>
<tr>
<td>Message</td>
<td>Serialized Message</td>
<td>Properties</td>
</tr>
<tr>
<td>Rectangle</td>
<td>0xE0000A</td>
<td>Coordinate X: 2 bytes</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Coordinate Y: 2 bytes</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Width: 2 bytes</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Height: 2 bytes</td>
</tr>
<tr>
<td>Point</td>
<td>0xC000 0xA000</td>
<td>Coordinate X: 1 or 2 bytes</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Coordinate Y: 1 or 2 bytes</td>
</tr>
<tr>
<td>Character</td>
<td>0xE00004</td>
<td>UTF-8 character (2 bytes)</td>
</tr>
<tr>
<td>DeleteMask</td>
<td>0xD000</td>
<td>2-byte identifier</td>
</tr>
</tbody>
</table>
Boolean values, null, and the Integer value 0 are encoded inside the property header. Integers bigger than 0 are “optimized” so that they occupy only as many bytes as their value requires. All numbers are big-endian.
String objects can be cached and then referenced by later Properties. As String objects are encountered, they are stored in a String array indexed by the 1-byte identifier following the type prefix. Later Properties can reference previous Strings by using the 0x9000 type prefix and supplying the String identifier. A String reference also causes the value in the String cache to be replaced with another element of the cache array (pointed to by the second byte after the type prefix).
Messages end with the byte value -16 (0xF0). A single HTTP message can hold multiple Messages. The end of the Message sequence is indicated by the 0xF001 byte pair.
To help interpret decrypted Messages, a simple test program (MessageTester.java) was created. The program takes the hexadecimal representation of a message as an argument and parses it using the packages of frmall.jar. The decompiled source of the oracle.forms.engine.Message class can be provided to standard debugging tools to allow step-by-step tracing of the parser. A decrypted Message stream is provided below as an example:
```
10000b50adf010002450ae0f0100006a089038402bcf0f01
```
The stream consists of three Messages: two boolean values and a Point object. Message headers are three bytes long. In the case of boolean values the type identifier prefix (0x5000 for true) encodes the value itself. The coordinate values of the Point are represented as two 2-byte values; in this case the Point is at the (900,700) coordinate (0x384, 0x2bc). Message terminators are the 0xF0 bytes, and the final Message is indicated by the 0xF001 sequence.
### 2.2. Attacks on Cryptography
#### 2.2.1. Overview of RC4 weaknesses
Because of its cryptographic weaknesses, the prohibition of RC4 in the TLS protocols has been proposed by the IETF in RFC 7465 (Popov, 2015). That decision was induced by the results of AlFardan et al. showing that plaintext recovery is possible under realistic circumstances (Nadhem AlFardan, 2013). The attacks proposed by this research “require a fixed plaintext to be RC4-encrypted and transmitted many times in succession” and “large amounts of ciphertext” to be recorded. The first requirement is met because the initialization messages of Oracle Forms are mostly static (see section 2.2.3). The fulfillment of the second requirement depends on the particular target. It is also worth noting that the attacks were developed against the 128-bit version of the cipher, while Oracle Forms uses the weakest, 40-bit version. This may reduce the complexity of the attacks significantly.
While the known cryptanalytic attacks seem applicable to Oracle Forms, their implementation is out of the scope of this paper. The following sections provide more effective ways to break the security of the communication by exploiting the naive use of cryptography.
#### 2.2.2. Passive Network Attacks
In a passive network attack the attacker intercepts the entire communication between the Oracle Forms client and the server. In this case the key exchange scheme described in 2.1.1 can be trivially attacked: the attacker intercepts both the server- and client-supplied components of the key and constructs the RC4 key that can be used to decrypt the whole encrypted data stream. This approach is implemented in the OracleFormsTester Burp Suite extension accompanying this document.
This attack against the key exchange renders encryption useless because a passive attacker can obtain the key. If no other protections (like TLS or IPsec) are implemented the security of communication is equivalent to plain HTTP.
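Because the cipher is plain RC4, an eavesdropper who has derived the key does not need the framework's own stream classes at all; any independent RC4 implementation works. A minimal sketch using the JDK's built-in "ARCFOUR" cipher (the key and data values are illustrative, not captured traffic):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class PassiveDecrypt {
    // RC4 is a symmetric stream cipher: the same transformation
    // encrypts and decrypts, so one routine covers both directions.
    static byte[] rc4(byte[] key, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("ARCFOUR");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "ARCFOUR"));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = {0x33, 0x78, (byte) 0xAE, 0x22, 0x67}; // 40-bit key
        byte[] ct = rc4(key, "GDayMate".getBytes());        // "encrypt"
        System.out.println(new String(rc4(key, ct)));       // round-trips
    }
}
```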
2.2.3. Active Network Attacks
During an active network attack the attacker can intercept and modify any traffic passing between the Oracle Forms server and client. One of the countermeasures against these attacks is mutual authentication, which prevents the attacker from acting as a man in the middle. Oracle Forms doesn’t implement any authentication scheme at the network level; the only requirement for a man-in-the-middle attack is knowledge of the protocol of the framework.
Another important countermeasure against active attacks is integrity checking, which is especially important when using stream ciphers like RC4 to avoid bit-flipping attacks (David LeBlanc, 2002). In a bit-flipping attack the attacker takes advantage of the general structure of stream ciphers to modify bits in the decrypted plaintext by flipping bits at the same position in the ciphertext. Since Oracle Forms doesn’t implement integrity protection, bit-flipping attacks can be trivially mounted. For the bit-flipping to be meaningful the attacker must predict the position of the data to be corrupted.
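The mechanics are easy to demonstrate with a toy keystream (all values below are illustrative): XOR-ing a ciphertext byte with `oldPlain ^ newPlain` makes it decrypt to `newPlain`, without the attacker ever knowing the key.

```java
public class BitFlip {
    // Flip ciphertext bits so the decryption changes from oldPlain to
    // newPlain at the given offset -- the stream-cipher property the
    // paper exploits against the configuration string.
    static void patch(byte[] ct, int off, byte[] oldPlain, byte[] newPlain) {
        for (int k = 0; k < oldPlain.length; k++)
            ct[off + k] ^= (byte) (oldPlain[k] ^ newPlain[k]);
    }

    // Stand-in for the RC4 keystream XOR (any stream cipher behaves alike).
    static byte[] xor(byte[] data, byte[] keystream) {
        byte[] out = new byte[data.length];
        for (int k = 0; k < data.length; k++)
            out[k] = (byte) (data[k] ^ keystream[k]);
        return out;
    }

    public static void main(String[] args) {
        byte[] ks = {1, 2, 3, 4, 5, 6, 7, 8, 9};     // toy keystream
        byte[] ct = xor("127.0.0.1".getBytes(), ks); // attacker sees only ct
        patch(ct, 1, new byte[]{'2', '7'}, new byte[]{'7', '2'});
        System.out.println(new String(xor(ct, ks))); // decrypts to 172.0.0.1
    }
}
```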
In case of Oracle Forms there is an obvious opportunity for exploitation during the standard applet initialization phase. In this phase the client sends an Oracle configuration string to the server. This string usually contains the name or IP address of the Oracle server to connect to. The offset of the server location substring is predictable because the format of the configuration string is known and it is always sent at the start of the first client request.
The following screenshots show how an encrypted request was edited using Burp Suite. The configuration string of the original request was:
server escapeParams=true module=tuto_forms.fmx
userid=SYSTEM/oracle@127.0.0.1/XE sso_userid=%20
sso_formsid=%25OID_FORMSID%25 sso_subDN= sso_usrDN= debug=no
host= port= debug_messages=no
array=no obr=no query_only=no quiet=yes render=no record=
tracegroup= log= term=
The original configuration pointed to the Oracle database at 127.0.0.1. As a demonstration, the bits of the 2nd and 3rd bytes of the IP address were flipped to produce the “72” substring:
After the edited request was forwarded to the server, connection attempts to the 172.0.0.1 address were registered by the Wireshark network analyzer. Since this host didn’t exist, the connection timed out and the client reported an error. After the error was dismissed, the client displayed a login window containing the modified IP address.
Figure 3. Results of bit-flipping at client and server side
This proof-of-concept attack demonstrates that the encryption scheme is vulnerable to bit-flipping attacks independently from the strength of the key-exchange protocol phase.
2.2.4. Insecure password handling
In section 2.2.3 the database user and the password were configured server side, but the modified packet containing the same information was sent by the client. This means that Oracle Forms sends the configured database credentials to its clients. This creates a security weakness that allows unauthenticated attackers to obtain the configured password. The credentials are sent to the client in response to a request similar to the following:
GET /forms/lservlet;jsessionid=sessionid?ifcmd=getinfo&ifhost=hostname&ifip=192.168.1.1 HTTP/1.0
The response contains the credentials encoded with the DEFLATE algorithm (Deutsch, 1996). The response body can be decoded using the zlib library (Akira, 2010).
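In Java the same decoding is available without external libraries through java.util.zip. A minimal sketch, assuming the body carries the default zlib wrapper (the class name is illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Inflater;

public class InflateBody {
    // Inflate a DEFLATE/zlib-compressed response body.
    static byte[] inflate(byte[] body) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(body);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inf.finished()) {
            int n = inf.inflate(buf);
            if (n == 0 && inf.needsInput()) break; // truncated stream
            out.write(buf, 0, n);
        }
        inf.end();
        return out.toByteArray();
    }
}
```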
Further research is required to assess if other configuration parameters sent by the client can be used to conduct attacks.
### 2.2.5. Brute-force
If an attacker can intercept only a subset of the encrypted HTTP messages (e.g. when performing an ARP poisoning attack (Jeff King, 2010)), she can’t rely on the weak key exchange scheme, as the required key material is not necessarily included in the captured packets.
Since the key size of the protocol is effectively 32 bits (see 2.1.1), brute-force attacks against the captured ciphertext can be practical. As a proof of concept a primitive Java program (`OracleFormsBruteForce.java`) was built that decrypts a given ciphertext with every possible key and detects whether the resulting plaintext is a valid Oracle Forms Message. This program is suboptimal: it is written in a high-level language, runs only a single thread, decrypts the whole message every time, etc. Still, the program performs around 180,000 tries per second on a dual-core Intel Core i5-520M (2.4 GHz) CPU. At this speed a valid key is found in the 32-bit key space in 3.3 hours on average.
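The search itself can be sketched as follows. The RC4 routine is a textbook implementation (not the framework's classes); the search is restricted to a toy 16-bit sub-space of the last two key bytes so it runs instantly, and a known plaintext prefix stands in for the real Message-validity check. All names and values are illustrative, not taken from `OracleFormsBruteForce.java`.

```java
import java.util.Arrays;

public class BruteForceSketch {
    // Textbook RC4: key-scheduling (KSA) followed by the PRGA keystream,
    // XOR-ed with the data.
    static byte[] rc4(byte[] key, byte[] data) {
        int[] s = new int[256];
        for (int i = 0; i < 256; i++) s[i] = i;
        for (int i = 0, j = 0; i < 256; i++) {
            j = (j + s[i] + (key[i % key.length] & 0xFF)) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
        }
        byte[] out = new byte[data.length];
        for (int k = 0, a = 0, b = 0; k < data.length; k++) {
            a = (a + 1) & 0xFF;
            b = (b + s[a]) & 0xFF;
            int t = s[a]; s[a] = s[b]; s[b] = t;
            out[k] = (byte) (data[k] ^ s[(s[a] + s[b]) & 0xFF]);
        }
        return out;
    }

    // Iterate the last two key bytes (a 16-bit toy space; the real attack
    // walks the full 32-bit space) and accept the key whose decryption
    // starts with the expected plaintext prefix.
    static byte[] crack(byte[] ct, byte[] knownPrefix, byte[] keyTemplate) {
        for (int v = 0; v < 0x10000; v++) {
            byte[] key = keyTemplate.clone();
            key[3] = (byte) (v >> 8);
            key[4] = (byte) v;
            byte[] pt = rc4(key, ct);
            if (Arrays.equals(Arrays.copyOf(pt, knownPrefix.length), knownPrefix))
                return key;
        }
        return null;
    }
}
```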
The average attack speed could probably be reduced to minutes on average hardware, but further optimizations were not made, as the approach has serious practical limitations. The assumption that the attacker doesn’t intercept the communication from the beginning implies that the attacker doesn’t know which part of the stream she is intercepting. To measure the effect of this uncertainty a modified version of the brute-force program was built that predicts parts of the keystream, assuming that the intercepted ciphertext begins with a known 3-byte header (see 2.1.2). RC4 works by generating a pseudo-random byte stream (the keystream) that is XOR-ed with the bytes of the plaintext. To predict part of the matching keystream, the first 3 bytes of the ciphertext are XOR-ed with the presumed plaintext header bytes. Then an $n$ byte long keystream is generated in which the program looks for the predicted byte triplet for every possible key. It is assumed that the attacker knows which part of the cipher stream she intercepted with $n$ byte precision. If the program finds a match it tries to use the keystream from the matching offset to decrypt the whole ciphertext. If the decryption results in a valid protocol message, the key (and the matching offset) is found. The following diagram shows how the performance of the brute-force search is impacted as $n$ grows:

Figure 4. Brute-force performance in unsynchronized state
The diagram shows that losing synchronization with the stream cipher has serious impact on the performance of a brute-force attack. Such an attack may be feasible if the average sessions are relatively short or if the attacker can obtain additional information about the state of the cipher.
It’s worth noting that this is not just a limitation for the attacker: In case of network problems legitimate Oracle Forms peers can easily lose sync.
### 2.3. Test Automation
Testing is a fundamental part of any software development process that helps eliminate a wide range of errors, from simple usability issues to critical vulnerabilities. Although manual testing can’t be avoided, test automation is fundamental to maintaining large software projects. In terms of security it is crucial to be able to effectively run a high number of test cases (like generic patterns of injection attacks) in a black-box manner.
In the case of Oracle Forms this effort is hindered by several factors: the application protocol is undocumented, the communication is encrypted, and the testing tools provided by the vendor are strongly integrated with its proprietary tools (Oracle Corporation, 2009).
The results of the reverse engineering described in section 2.1 make it possible to create tools which can interact with Oracle Forms applications without having access to the server-side components or source code. A proof-of-concept extension, OracleFormsTester, was built for Burp Suite Professional that demonstrates the practical application of the revealed details and also the challenges of automated testing.
2.3.1. Test software design
The main automated testing component of Burp Suite is the Scanner, which takes HTTP requests issued by the browser and replays them with modified content. The modifications are based on a large set of rules which define several payloads to be inserted as part of HTTP (or REST, etc.) parameters. To achieve a similar result in the case of Oracle Forms the testing software has to:
1. Intercept the traffic of the Oracle Forms applet
2. Decrypt the intercepted request
3. Identify potential insertion points (Message parsing)
4. Insert payload (Message serialization)
5. Encrypt and resend the modified request
Sample traffic can be easily intercepted by setting the proxy server used by the Java runtime to the address of an intercepting proxy (like Burp Suite). In the case of a raw TCP channel the routing table of the client can be modified to run traffic through an intercepting network node.
While both the cryptographic operations and the Message handling tasks may seem challenging, the Java technology allows easy code reuse: during normal operation the client-side part of the Oracle Forms framework (frmall.jar) is sent to the client for use as part of the applet. The downloaded archive contains all the code that is required to parse and create valid protocol messages. The archive can be saved and reused in custom programs by simply adding it to the Java classpath.
2.3.2. Decryption and Encryption
Decryption and encryption are implemented in the EncryptedInputStream and EncryptedOutputStream classes of the oracle.forms.net package. The classes inherit from “filter” stream classes (FilterInputStream, FilterOutputStream), which can serve as a transformation layer for other basic streams (like data or byte streams). Since a standard cipher is employed, independent implementations can also be used without modification. The encryption key can be trivially derived from the intercepted traffic.
The following screenshots demonstrate successful decryption of Oracle Forms Messages through the Message Editor Tab introduced by OracleFormsTester:
Figure 5. Intercepted request (encrypted)
6. Figure Decrypted body of intercepted request
```
```
Both the HTTP protocol and the Message format are stateless: valid messages can be constructed independently from each other. Encryption, however, requires the internal states of the ciphers on the client and server side to be synchronized. This means that issuing new messages independently of the client makes the applet unable to communicate with the server. Also, corrupted messages (like heartbeats) sent by the client can desynchronize the testing tool. Both problems can be solved by shutting down the client as the testing starts. A more advanced solution is to block the traffic of the client while the tests run and then set the internal state of the cipher appropriately via a debugger.
In the case of OracleFormsTester, every decrypted request body and the corresponding cipher state are stored in a HashMap indexed by the hash of the encrypted body. This way, the plaintext message can be found for every intercepted ciphertext. The recorded plaintext messages are also used to select all String properties as Scanner insertion points. When the Scanner runs, the Messages are deserialized (see section 2.3.3), the insertion points are replaced with the upcoming payload, and the Message is serialized and encrypted again.
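That bookkeeping can be sketched with standard JDK classes. The class, method, and field names below are illustrative, not the extension's actual ones, and SHA-256 stands in for whatever hash the extension uses:

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class PlaintextCache {
    // Maps hash(encrypted body) -> recorded plaintext. The real extension
    // also stores the cipher state snapshot alongside the plaintext.
    private final Map<String, byte[]> byCiphertextHash = new HashMap<>();

    static String sha256Hex(byte[] data) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public void record(byte[] encryptedBody, byte[] plaintext) throws Exception {
        byCiphertextHash.put(sha256Hex(encryptedBody), plaintext);
    }

    // Returns the recorded plaintext for an intercepted ciphertext, or null.
    public byte[] lookup(byte[] encryptedBody) throws Exception {
        return byCiphertextHash.get(sha256Hex(encryptedBody));
    }
}
```

Hashing the ciphertext (rather than using the raw bytes as a key) gives a compact, well-behaved map key for arbitrary-length bodies.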
2.3.3. Message parsing and serialization
Message parsing and serialization are performed by the readDetails() and writeDetails() methods of the Message class, respectively. These implementations can be easily reused, but successful message parsing requires taking care of two important details. First, the readDetails() method should be invoked with a valid String array as its second argument to support String caching (see section 2.1.2). Second, when decrypting Messages, the EncryptedInputStream(InputStream, boolean) constructor should be used with the second argument set to true to initialize the deciphering objects. This is important because the single-argument constructor declares only a 7800-byte internal buffer for the stream, while the two-argument version declares 8192 bytes. Since some initial protocol messages usually exceed 7800 bytes, they can't be read in a single run with the smaller buffer.
Successful Message parsing can be confirmed in the debug output of the OracleFormsTester extension:
```
Oracle Forms Tester loaded
Found GDay!
Found GDay! f1dbf270
Found Mate! 00000390
RC4 Key: f239aedb00
[...]
Encrypted Request: 77d4f3cdc03b22d2beb9ed71506fd22b5a9b50757516f1b341423f546127b524d4189c15afa244dabe0eb0e6d
Body: 12 byte(s)
Property 0: 137 Type: 9
--- Value: java.awt.Point[x=900,y=700]
```
2.4. Deployment Best Practices
The attacks presented in section 2.2 showed that, despite the encryption, the security of the Oracle Forms protocol is nearly equivalent to that of a plain-text channel. The transferred data should be protected with standard technologies like TLS or IPsec.
Oracle Forms shouldn't be configured with database access credentials, since this information is sent to the clients as described in section 2.2.4. The userid parameter should be left empty in the formsweb.cfg configuration file (Oracle Corporation, 2006). Database passwords of existing installations with such a configuration should be considered compromised.
3. Conclusion
This paper gave a detailed overview of the communication protocol of Oracle Forms and presented tools and methods to support automated testing. Multiple attacks were presented which prove that the protective features of the framework give a false sense of security, as they can be circumvented easily.
Although Oracle Forms is meant to be a tool for Rapid Application Development (Oracle Corporation, 2009), it considerably hinders testing, one of the most important steps of development. While test automation is possible, complex tools must be built to support Oracle Forms. This property affects not only the security of Oracle Forms-based applications but also their general quality. A symptom of this inadequacy is that this research revealed fundamental security weaknesses which would likely have been found and mitigated in time in the case of a more open technology.
The presented results confirm the opinion that Oracle Forms is an obsolete technology and that applications built on it should be replaced (Berg, 2013). Hopefully the provided information will help in assessing the quality of live applications and migrating them to more modern platforms. Further research is required to determine whether the design of Oracle Forms introduces security risks that may affect a wider range of applications.
Automated Security Testing of Oracle Forms Applications

References

- Stack Overflow answer: http://stackoverflow.com/a/8326032
- Berg. (2013). The Oracle Forms Dilemma, Part 1. Retrieved from http://whywebsphere.com/2013/02/25/the-oracle-forms-dilemma-part-1/
- White Box Testing. Retrieved from https://buildsecurityin.us-cert.gov/articles/best-practices/white-box-testing/white-box-testing
- Java's Losing Security Legacy. Retrieved from https://threatpost.com/javas-losing-security-legacy/102176
- Oracle Corporation. Application Server. Retrieved from http://docs.oracle.com/cd/A97338_01/doc/forms.6i/a83591/chap10.htm
- Oracle Corporation. (2006). Configuration Files. In Oracle Application Server Forms Services Deployment Guide 10g Release 2 (10.1.2). Retrieved from https://docs.oracle.com/cd/B14099_19/web.1012/b14032/basics002.htm
- Oracle Application Testing Suite OpenScript User's Guide. Retrieved from https://docs.oracle.com/cd/E25291_01/doc.900/e15488/opscript_using_offt_module.htm
- Connection and Invocation Details (Java Platform Debugger Architecture). Retrieved from https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/conninv.html
- Oracle Corporation. (n.d.). Oracle Forms. Retrieved March 15, 2015, from http://www.oracle.com/technetwork/developer-tools/forms/overview/index-098877.html
- OWASP Testing Guide v4: Appendix A, Testing Tools. Retrieved from https://www.owasp.org/index.php/Appendix_A:_Testing_Tools
Appendix
Decompiled RC4 pseudo-code
Relevant methods decompiled from oracle.forms.net.EncryptedInputStream:
```java
public synchronized void setEncryptKey(byte abyte0[]) {
if(abyte0 == null || abyte0.length == 0 || abyte0.length > 256)
throw new RuntimeException();
mSeedBuffer = new int[256];
mI = mJ = 0;
for(int i = 0; i < 256; i++)
mSeedBuffer[i] = i;
int l;
int k = l = 0;
for(int j = 0; j < 256; j++)
{
l = (l + (abyte0[k] & 0xff) + mSeedBuffer[j]) % 256;
int i1 = mSeedBuffer[j];
mSeedBuffer[j] = mSeedBuffer[l];
mSeedBuffer[l] = i1;
k = (k + 1) % abyte0.length;
}
}
public synchronized int read() throws IOException
{
if(mPos == mLength)
{
fill();
if(mPos == mLength)
return -1;
}
return mBuf[mPos++] & 0xff;
}
public synchronized int read(byte abyte0[], int i, int j)
throws IOException
{
if(mPos + j > mLength)
{
fill();
if(mPos >= mLength)
return -1;
}
int k = mLength - mPos;
k = k >= j ? j : k;
System.arraycopy(mBuf, mPos, abyte0, i, k);
mPos += k;
return k;
}
private void fill()
throws IOException
{
int i;
if(mPos == mLength)
{
mLength = 0;
i = 0;
}
else
{
mLength = mLength - mPos;
System.arraycopy(mBuf, mPos, mBuf, 0, mLength);
i = mLength;
}
int j = mIstream.read(mBuf, i, mBuf.length - i);
    if (j > 0) {
        mLength += j;
        mBytesCount += j;
    }
    mPos = 0;
    int ai[] = mSeedBuffer;
    if (ai != null) {
        int k = mI;
        int l = mJ;
        for (int il = i; il < mLength; il++) {
            k = (k + 1) % 256;
            l = (ai[k] + l) % 256;
            int j1 = ai[k];
            ai[k] = ai[l];
            ai[l] = j1;
            mBuf[il] ^= ai[(ai[k] + j1) % 256];
        }
        mI = k;
        mJ = l;
    }
}
```
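The key schedule in setEncryptKey() and the keystream loop in fill() are standard RC4. As a sanity check, a compact standalone re-implementation (hypothetical class name) can be compared against the JDK's own ARCFOUR cipher:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class MiniRc4 {
    private final int[] s = new int[256];
    private int i, j;

    // Key schedule, mirroring setEncryptKey() above.
    public MiniRc4(byte[] key) {
        for (int n = 0; n < 256; n++) s[n] = n;
        int l = 0, k = 0;
        for (int n = 0; n < 256; n++) {
            l = (l + (key[k] & 0xff) + s[n]) % 256;
            int t = s[n]; s[n] = s[l]; s[l] = t;
            k = (k + 1) % key.length;
        }
    }

    // Keystream generation plus XOR, mirroring the loop in fill().
    public byte[] process(byte[] data) {
        byte[] out = data.clone();
        for (int n = 0; n < out.length; n++) {
            i = (i + 1) % 256;
            j = (s[i] + j) % 256;
            int t = s[i]; s[i] = s[j]; s[j] = t;
            out[n] ^= s[(s[i] + t) % 256];
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "TestKey1".getBytes(StandardCharsets.US_ASCII);
        byte[] msg = "Attack at dawn".getBytes(StandardCharsets.US_ASCII);
        Cipher jdk = Cipher.getInstance("RC4");
        jdk.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "RC4"));
        byte[] expected = jdk.doFinal(msg);
        byte[] actual = new MiniRc4(key).process(msg);
        System.out.println(Arrays.equals(expected, actual) ? "match" : "mismatch");
    }
}
```

Such an independent implementation is what allows the testing tool to set the keystream position arbitrarily, instead of being tied to the applet's internal stream objects.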
High-Performance Analysis of Filtered Semantic Graphs
Aydin Buluc
Armando Fox
John Gilbert
Shoaib Ashraf Kamil
Adam Lugowski
Leonid Oliker
Samuel Williams
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2012-61
http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-61.html
May 6, 2012
### High-Performance Analysis of Filtered Semantic Graphs
**Abstract**
High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry attributes of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest or else must first materialize a subgraph or subgraphs consisting of only the vertices and edges of interest. The filtered approach is superior due to its generality, ease of use, and memory efficiency, but may carry a performance cost. In the Knowledge Discovery Toolbox (KDT), a Python library for parallel graph computations, the user writes filters in a high-level language, but those filters result in relatively low performance due to the bottleneck of having to call into the Python interpreter for each edge. In this work, we use the Selective Embedded JIT Specialization (SEJITS) approach to automatically translate filters defined by programmers into a lower-level efficiency language, bypassing the upcall into Python. We evaluate our approach by comparing it with the high-performance C++/MPI Combinatorial BLAS engine, and show that the productivity gained by using a high-level filtering language comes without sacrificing performance. We also present a new roofline model for graph traversals, and show that our high-performance implementations do not significantly deviate from the roofline.
### Subject Terms
- Semantic graph analysis
- High-performance computing
- Filtered graph traversal
- Python programming
- Knowledge Discovery Toolbox (KDT)
- Selective Embedded JIT Specialization (SEJITS)
- Combinatorial BLAS
High-Performance Analysis of Filtered Semantic Graphs
Aydin Buluç‡∗
abuluc@lbl.gov
Shoaib Kamil‡∗
skamil@cs.berkeley.edu
Armando Fox‡
fox@cs.berkeley.edu
John R. Gilbert†
gilbert@cs.berkeley.edu
Adam Lugowski‡
alugowski@cs.ucsb.edu
Leonid Oliker‡∗
loliker@lbl.gov
Samuel Williams‡
swwilliams@lbl.gov
†Dept. of Computer Science
University of California
Santa Barbara, CA 93106
‡EECS Dept.
University of California
Berkeley, CA 94720
‡CRD
Lawrence Berkeley Nat. Lab.
Berkeley, CA 94720
1. INTRODUCTION
1.1 Motivation
Large-scale graph analytics are a central requirement of bioinformatics, finance, social network analysis, national security, and many other fields. Going beyond simple searches, analysts use high-performance computing systems to execute complex graph algorithms on large corpora of data. Often, a large semantic graph is built up over time, with the graph vertices representing entities of interest and the edges representing relationships of various kinds—for example, social network connections, financial transactions, or interpersonal contacts.
Besides the obvious storage problems with materialization, the time spent during materialization is typically not amortized by many graph queries, because the user modifies the query (or just the filter) during interactive data analysis. The alternative is to filter edges and vertices “on the fly” during execution of the complex graph algorithm. A graph algorithms expert can implement an efficient on-the-fly filter as a set of primitive Combinatorial BLAS operations coded in C/C++; but filters written at the KDT level, as graph operations in Python, incur a significant performance penalty.
Our solution to this challenge is to apply Selective Just-In-Time Specialization techniques from the SEJITS approach [7]. We define a semantic-graph-specific filter domain-specific language (DSL), a subset of Python, and use SEJITS to implement the specialization necessary for filters written in that subset to execute as efficiently as low-level C code.
As a result, we are able to demonstrate that SEJITS technology significantly accelerates Python graph analytics codes written in KDT and running on clusters and multicore CPUs. An overview of our approach is shown in Figure 1.
Figure 2 compares the performance of four filtering implementations on a breadth-first search query in a graph with 8 million vertices and 128 million edges. The chart shows time to perform the query as we synthetically increase the portion of the graph that passes the filter on an input R-MAT [18] graph of scale 23. The top two lines are the methods implemented in the current release v0.2 of KDT [2]: slowest is materializing the subgraph before traversal, and next is on-the-fly filtering in Python. The third, red, line is our new SEJITS+KDT implementation, which shows minimal overhead and comes very close to the performance of native Combinatorial BLAS in the fourth line.
1.2 Main contributions
The primary new contributions of this paper are:
1. A system design that allows domain-expert graph analysts to describe filtered semantic graph operations in a high-level language, using KDT v0.2.
3. Experimental demonstration of excellent performance scaling to graphs with millions of vertices and hundreds of millions of edges.
5. A detailed case study of the use of algebraic semiring operations as an alternative low-level approach to filtering, using the Combinatorial BLAS.
1.3 Example of a filtered query
Here we present a simple example of a filtered query in a semantic graph. We will refer to this example through the paper, showing how the different implementations of filters express the query and comparing their performance executing it.
We consider a graph whose vertices are Twitter users, and whose edges represent two different types of relationships between users. In the first type, one user “follows” another; in the second type, one user “retweets” another user’s tweet. Each retweet edge carries as attributes a timestamp and a count. Figure 3 shows a fragment of such a graph. Our experiments are with several semantic graphs, of various sizes, constructed from publicly available data on tweets during 2009. The largest graph has about 17 million vertices and 720 million edges. Section 7 describes the datasets in more detail.
Our sample query is the one mentioned above: Given a vertex of interest, determine the number of hops required to reach each other vertex by using only retweeting edges timestamped earlier than June 30. The filter in this case is a boolean predicate on edge attributes that defines the types and timestamps of the edges to be used. The query is a breadth-first search in the graph that ignores edges that do not pass the filter.
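The query can be sketched in plain Python on a toy edge list (vertex ids, the attribute layout, and the dates are illustrative): each BFS step simply skips edges for which the predicate fails.

```python
from collections import deque

# Toy semantic graph: adjacency list of (target, edge_type, timestamp) triples.
graph = {
    0: [(1, "retweet", "2009-05-01"), (2, "follow", "2009-01-01")],
    1: [(2, "retweet", "2009-08-10")],
    2: [(3, "retweet", "2009-06-01")],
    3: [],
}

def filtered_bfs(graph, root, pred):
    """Hop counts from root, traversing only edges that pass pred."""
    hops = {root: 0}
    frontier = deque([root])
    while frontier:
        v = frontier.popleft()
        for w, etype, ts in graph[v]:
            if pred((etype, ts)) and w not in hops:
                hops[w] = hops[v] + 1
                frontier.append(w)
    return hops

# Only retweet edges timestamped earlier than June 30 pass the filter.
before_june30 = lambda e: e[0] == "retweet" and e[1] < "2009-06-30"
print(filtered_bfs(graph, 0, before_june30))
# → {0: 0, 1: 1}: vertex 2 is reachable only through filtered-out edges.
```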
Figure 2: Performance of a filtered BFS query, comparing four methods of implementing custom filters. The vertical axis is running time in seconds on a log scale; lower is better. From top to bottom, the methods are: materializing the filtered subgraph; on-the-fly filtering with high-level Python filters in KDT; on-the-fly filtering with high-level Python filters specialized at runtime by SEJITS+KDT (this paper’s main contribution); on-the-fly filtering with low-level C++ filters implemented as customized semiring operations and compiled into Combinatorial BLAS. The graph has 8 million vertices and 128 million edges. The runs use 36 cores (4 sockets) of Intel Xeon E7-8870 processors.
Figure 3: Graph of “following” and “retweeting” relationships. Black edges denote following, and red edges denote retweeting. Red edges are also labelled with counts and timestamps, not shown.
1.4 Outline of the paper
We first survey related work. Then, Section 3 shows how a filter can be implemented below the KDT level, as a user-specified semiring operation in the C++/MPI Combinatorial BLAS library that underlies KDT. This is a path to high performance at the cost of usability: the analyst must translate the graph-attribute definition of the filter into low-level C++ code for custom semiring scalar operations in Combinatorial BLAS.
Section 4 describes the high-level filtering facility that is new in Version 0.2 of KDT, in which filters are specified as simple Python predicates. This approach yields easy customization, and scales to many queries from many analysts without demanding correspondingly many graph programming experts; however, it poses challenges to achieving high performance.
Section 5 is the technical heart of the paper, which describes how we meet these performance challenges by selective, embedded, just-in-time specialization with SEJITS.
Section 6 proposes a theoretical model that can be used to evaluate the performance of our implementations, giving “roofline” bounds on the performance of breadth-first search in terms of architectural parameters of a parallel machine, and the permeability of the filter.
Section 7 presents our experimental results, and Section 8 gives our conclusions and some remarks on future directions and problems.
2. RELATED WORK
Graph Algorithm Packages
Pegasus [15] is a graph-analysis package that uses MapReduce [9] in a distributed-computing setting. Pegasus uses a primitive called GIM-V, much like KDT’s SpMV, to express vertex-centered computations that combine data from neighboring edges and vertices. This style of programming is called “think like a vertex” in Pregel [21], a distributed-computing graph API. Both of these systems require the application to be written in a low-level language (Java and C++, respectively) and neither has filter support.
Other libraries for high-performance computation on large-scale graphs include the Parallel Boost Graph Library [12], the Combinatorial BLAS [5], Georgia Tech’s SNAP [3], and the Multithreaded Graph Library [4]. These are all written in C/C++ and do not include explicit filter support.
The first two support distributed memory as well as shared memory while the latter two require a shared address space. SPARQL [23] is a query language for Resource Description Framework (RDF) [16] that can support semantic graph database queries. The existing database engines that implement SPARQL and RDF support filtering based queries efficiently but they are currently not suitable for running traversal based tightly-coupled graph computations scalably in parallel environments.
The closest previous work is Green Marl [13], a domain specific language (DSL) for small-world graph exploration that runs on GPUs and multicore CPUs without support for distributed machines.
JIT Compilation of DSLs
Embedded DSLs [10] for domain-specific computations have a rich history, including DSLs that are compiled instead of interpreted [17]. Abstract Syntax Tree introspection for such DSLs has been used most prominently for database queries in ActiveRecord [1], part of the Ruby on Rails framework.
The approach applied here, which uses AST introspection combined with templates, was first applied to stencil algorithms and data parallel constructs [7], and subsequently to a number of domains including linear algebra and Gaussian mixture modeling [14].
3. FILTERS AS SCALAR SEMIRING OPS
The Combinatorial BLAS (CombBLAS for short) views graph computations as sparse matrix computations using various algebraic semirings (such as the tropical (min,+) semiring for shortest paths, or the real (+,*) semiring/field for numerical computation). The expert user can define new semirings and operations on them in C++ at the CombBLAS level, but most KDT users do not have the expertise for this.
Two fundamental kernels in CombBLAS, sparse matrix-vector multiplication (SpMV) and sparse matrix-matrix multiplication (SpGEMM), both explore the graph by expanding existing frontier(s) by a single hop. The semiring scalar multiply operation determines how the data on a sequence of edges are combined to represent a path, and the semiring scalar add operation determines how to combine two or more parallel paths. In a similar framework, Pegasus [15], semiring multiply is referred to as combine2 and semiring add is referred to as combineAll, followed by an assign operation. However, Pegasus’s operations lack the algebraic completeness of CombBLAS’s semiring framework.
Filters written as semiring operations in C++ can have high performance because the number of calls to the filter operations is asymptotically the same as the minimum necessary calls to the semiring scalar multiply operation, and the filter itself is a local operation that uses only the data on one edge. The filtered multiply returns a “null” object (formally, the semiring’s additive identity or SAID) if the predicate is not satisfied.
For example, Figure 4 shows the scalar multiply operation for our running example of BFS on a Twitter graph. The usual semiring multiply for BFS is select2nd, which returns the second value it is passed; the multiply operation is modified to only return the second value if the filter succeeds. At the lowest levels of SpMV, SpGEMM, and the other CombBLAS primitives, the return value of the scalar multiply is checked against SAID, the additive identity of the semiring (in this example, the default-constructed ParentType object is the additive identity), and the returned object is retained only if it does not match the SAID.
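In Python terms (rather than the C++ of CombBLAS), the pattern is: the filtered multiply returns the additive identity, and the caller discards SAID results. A minimal sketch, with SAID, the adjacency layout, and the edge encoding all illustrative:

```python
SAID = None  # stand-in for the semiring's additive identity

def filtered_select2nd(pred):
    """select2nd multiply for BFS that returns SAID when the edge fails."""
    def multiply(edge_value, frontier_value):
        return frontier_value if pred(edge_value) else SAID
    return multiply

def expand(adj, frontier, multiply):
    """One SpMV-style frontier expansion over a dict-of-dicts adjacency
    matrix; contributions equal to SAID are dropped."""
    result = {}
    for v, val in frontier.items():
        for w, edge in adj.get(v, {}).items():
            y = multiply(edge, val)
            if y is not SAID:
                result[w] = y
    return result

adj = {0: {1: ("retweet", 10), 2: ("follow", 5)}}
mult = filtered_select2nd(lambda e: e[0] == "retweet")
print(expand(adj, {0: 0}, mult))  # → {1: 0}: only the retweet edge survives
```

The number of filter evaluations equals the number of scalar multiplies, which is the asymptotic behavior the section above attributes to semiring-level filters.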
4. KDT FILTERS IN PYTHON
The Knowledge Discovery Toolbox [19, 20] is a flexible open-source toolkit for implementing complex graph algorithms and executing them on high-performance parallel computers. KDT is targeted at two classes of users: domain-expert analysts who are not graph experts and who use KDT primarily by invoking existing KDT routines from Python, and graph-algorithm developers who use KDT primarily by writing Python code that invokes and composes KDT’s computational primitives. These computational primitives are supplied by a parallel backend, the Combinatorial BLAS [5], which is written in C++ with MPI for high performance.
4.1 Filter semantics
In KDT, any graph algorithm can be performed with an edge filter. A filter is a unary predicate on an edge that returns true if the edge is to be considered, or false if it is to be ignored. The KDT user writes a filter predicate as a Python function or lambda expression of one input that returns a boolean value; Figure 5 is an example.
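Figure 5 itself is not reproduced here, but for the running Twitter query such a predicate would look roughly as follows (the edge-attribute names are illustrative, not KDT's actual ones):

```python
# Hypothetical edge-attribute object, mirroring the kind of record a KDT
# semantic graph stores on each edge.
class Edge(object):
    def __init__(self, is_retweet, timestamp):
        self.isRetweet = is_retweet
        self.timestamp = timestamp

# The filter itself: a unary boolean predicate over a single edge.
def before_june30(e):
    return e.isRetweet and e.timestamp < 20090630

print(before_june30(Edge(True, 20090401)))   # → True
print(before_june30(Edge(False, 20090401)))  # → False
```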
Using a filter does not require any change in the code for the graph algorithm. For example, KDT code for betweenness centrality or for breadth-first search is the same whether or not the input semantic graph is filtered. This works because the filtering is done in the low-level primitives; user code remains ignorant of filters. Our design allows all current and future KDT algorithms to support filters without any extra effort required on the part of the algorithm designer.
Since filtered graphs behave just like unfiltered ones, it is possible in KDT to add another filter to an already filtered graph. The result is a nested filter whose predicate is a lazily-evaluated logical and of the individual filter predicates. Filters are evaluated in the order they are added. This allows both end users and algorithm designers to use filters for their own purposes without having to worry about each other.
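The nesting semantics can be sketched as a predicate combinator (our illustration, not KDT's actual implementation):

```python
# Sketch of nested-filter semantics: the individual predicates are
# combined with a lazily evaluated logical AND, applied in the order
# the filters were added.
def nest_filters(filters):
    def combined(e):
        for f in filters:          # evaluation order = addition order
            if not f(e):           # short-circuit: later filters never
                return False       # see an edge an earlier one rejected
        return True
    return combined
```

For example, `nest_filters([f, g])` behaves like `lambda e: f(e) and g(e)`, with `g` evaluated only when `f` accepts the edge.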
4.2 Materializing filters and on-the-fly filters
KDT supports two approaches for filtering semantic graphs:
- **Materializing filter**: When a filter is placed on a graph (or matrix or vector), the entire graph is traversed and a copy is made that includes only the edges that pass the filter. We refer to this approach as materializing the filtered graph.
- **On-the-fly filter**: No copy of the graph/matrix/vector is made. Rather, every primitive operation (e.g. semiring scalar multiply and add) applies the filter to its inputs when it is called. Roughly speaking, every primitive operation accesses the graph through the filter and behaves as if the filtered-out edges were not present.
Both materializing and on-the-fly filters have their place; neither is superior in every situation. For example, materialization may be more efficient when a user wants to run many analyses on a well-defined small subset of a large graph. On the other hand, materialization may be impossible if the graph already fills most of memory; and materialization may be much more expensive than on-the-fly filtering for a query whose filter restricts it to a localized neighborhood and thus does not even touch most of the graph. Indeed, an analyst who needs to modify and fine-tune a filter while exploring data may not be willing to wait for materialization at every step of the way.
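The two approaches can be illustrated on a plain edge list (a simplified sketch; real KDT operates on CombBLAS sparse matrices rather than Python lists):

```python
def materialize(edges, pred):
    # Materializing filter: traverse the whole structure once and make
    # a copy containing only the edges that pass the filter.
    return [e for e in edges if pred(e)]

def on_the_fly(edges, pred):
    # On-the-fly filter: no copy is made; the predicate is applied
    # lazily each time a primitive consumes the edges.
    return (e for e in edges if pred(e))
```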
The focus of this paper is on-the-fly filtering and how to make it more efficient, though our experiments do include comparisons with materializing filters.
4.3 Implementation details
Filtering a semiring operation requires the semiring scalar multiply to be able to return “nothing”, in the sense that the result should be the same as if the multiply had never happened. In semiring terms, this means that the multiply operation must return the semiring’s additive identity (SAID for short). CombBLAS treats the additive identity SAID the same as any other value. However, CombBLAS uses a sparse data structure to represent a graph as an adjacency matrix—and, formally speaking, SAID is the implicit value of any matrix entry that is not stored explicitly.
CombBLAS ensures that SAID is never stored as an explicit value in a sparse structure. (This corresponds to Matlab’s convention that explicit zeros are never stored in sparse matrices [11], and differs from the convention in the CSparse sparse matrix package [8].) Note that SAID need not be “zero”: for example, in the min-plus semiring used for shortest path computations, SAID is ∞. Indeed, it is possible for a single graph or matrix to be used with different underlying semirings whose operations use different SAIDs.
We benchmarked several approaches to representing, manipulating, and returning SAID values from semiring scalar operations. In the end, we decided that the basic scalar operations would include a returnedSAID() predicate, which can be called after the scalar operation, and that KDT would not have an explicit representation of a SAID value.
The result is a clean implementation of on-the-fly filters: filtered semiring operations just require a shim in the multiply() function that causes returnedSAID() to return true if the value is filtered; the lower-level algorithms call this function after performing the scalar multiply operation.
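The mechanism can be mimicked in Python (an illustrative sketch; the class name `FilteredSelect2nd` is ours, but the select2nd multiply and the returnedSAID() check follow the description above):

```python
class FilteredSelect2nd:
    """Sketch of a filtered semiring scalar multiply with a
    returnedSAID() flag, mirroring how the lower-level algorithms
    check the result of each multiply."""

    def __init__(self, pred):
        self.pred = pred      # the user's edge filter
        self._said = False

    def multiply(self, a, b):
        # select2nd returns its second argument; the shim marks the
        # result as SAID when the filter rejects the edge value.
        if self.pred(b):
            self._said = False
            return b
        self._said = True
        return None  # placeholder: callers must consult returnedSAID()

    def returnedSAID(self):
        return self._said
```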
5. SEJITS AND FILTERS
In order to mitigate the slowdown caused by defining semirings in Python, which results in a serialized upcall into Python for each operation, we opt to instead use the Selective Embedded Just-In-Time Specialization (SEJITS) approach [7]. By defining an embedded DSL for KDT filters, and then translating it to C++, we can avoid performance penalties while still allowing users the flexibility to specify filters in Python. We use the Asp¹ framework to implement our DSL.
Our approach is shown in Figure 6. In the usual KDT case, filters are written as simple Python functions. Since KDT uses Combinatorial BLAS at the low level to perform graph operations, each operation at the Combinatorial BLAS level must check to see whether the vertex or edge should be taken into account, requiring a per-vertex or per-edge upcall into Python. Furthermore, since Python is not thread-safe, this essentially serializes the computation in each MPI process.
In this work, we define an embedded domain specific language for filters, and allow users to write their filters in this DSL, expressed as a subset of Python with normal Python syntax. Then, at instantiation, the filter source code is introspected to get the Abstract Syntax Tree (AST), and then is translated into low-level C++. Subsequent applications of the filter use this low-level implementation, sidestepping the serialization and cost of upcalling into Python.
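The introspection step can be sketched with Python's standard `ast` module (the filter source and this snippet are our illustration; Asp's real pipeline additionally lowers the resulting tree to C++):

```python
import ast

# Source of a hypothetical filter, as a user would write it in the DSL.
FILTER_SRC = """
def filter(e):
    return e.isRetweet and e.latest < 20090630
"""

# Parse the filter into an Abstract Syntax Tree; the specializer would
# walk this tree and emit equivalent C++ for each supported node.
tree = ast.parse(FILTER_SRC)
func = tree.body[0]
```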
In the next section, we define our domain-specific language and show several examples of filters written in Python.
5.1 Semantic Model for Filters
In our approach, we first define the semantic model of filters, which is the intermediate form of our DSL. The semantic model expresses the semantics of filters. After defining this, we then map pure-Python constructs to constructs in the semantic model. It is this pure-Python mapping that users use to write their filters.
In defining the semantic model, we must look at what kinds of operations filters perform. In particular, vertex and edge filters are functions that take in one or two inputs and return a boolean. Within the functions, filters must allow users to inspect fields of the input data types, do comparisons, and perhaps perform arithmetic with fields. In addition, we want to (as much as possible) prevent users from writing filters that do not conform to our assumptions; although we could use analysis for this, it is much simpler to construct the language in a manner that prevents users from writing non-conformant filters. If the filter does not fit into our language, we run it in the usual fashion, by doing upcalls into pure Python. Thus, if users write their filters correctly, they achieve fast performance; otherwise the experience is no worse than before: the filter still runs, just not at full speed.
The semantic model is shown in Figure 7. We have constructed this to make it easy to write filters that are “correct-by-construction;” that is, if they fit into the semantic model, they follow the restrictions of what can be translated. For example, we require that the return be provably a boolean (by forcing the BoolReturn node to have a boolean body), and that there is either a single input or two inputs (either UnaryPredicate or BinaryPredicate).
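A rough approximation of such a conformance check, written against Python's `ast` module (this validator is ours; KDT's real check is structured around the semantic-model node types of Figure 7):

```python
import ast

def conforms_to_model(src):
    """Return True if the filter source roughly fits the semantic model:
    a single function of one or two arguments (UnaryPredicate or
    BinaryPredicate) containing no calls or loops."""
    tree = ast.parse(src)
    if len(tree.body) != 1 or not isinstance(tree.body[0], ast.FunctionDef):
        return False
    fn = tree.body[0]
    if not 1 <= len(fn.args.args) <= 2:
        return False
    # Constructs outside the model trigger the pure-Python fallback.
    banned = (ast.Call, ast.While, ast.For)
    return not any(isinstance(n, banned) for n in ast.walk(fn))
```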
Given the semantic model, now we define a mapping from Python syntax to the semantic model.
5.2 Python Syntax for the Filter DSL
Users of KDT are not exposed to the semantic model. Instead, the language they use to express filters in our DSL is a subset of Python, corresponding to the supported operations. Informally, we specify the language by talking about what a filter can do: namely, a filter takes in one or two inputs (that are of pre-defined edge/vertex types), must return a boolean, and is allowed to do comparisons, accesses, and arithmetic on immediate values and edge/filter instance variables. In addition, to facilitate translation, we require that a filter be an object that inherits from the PcbFilter Python class, and that the filter function itself is a member function called filter.
The example KDT filter from Figure 5 is presented in SEJITS syntax in Figure 8. Note that because a filter cannot call a function, we must use immediate values for checking the timestamp. However, even given our relatively restricted syntax, users can specify a large class of useful filters in our DSL. In addition, if the filter does not fit into our DSL, it is still executed using the slower upcalls to pure Python after issuing a warning to the user.
---
¹URL blinded for submission
class MyFilter(PcbFilter):
    def filter(self, e):
        # if it is a retweet edge
        if (e.isRetweet and
                # and it is before June 30
                e.latest < JUNE_30):
            return True
        else:
            return False
Figure 8: Example of an edge filter that the translation system can convert from Python into fast C++ code.
Table 1: Overheads of using the filtering DSL.
<table>
<thead>
<tr>
<th></th>
<th>First Run</th>
<th>Subsequent</th>
</tr>
</thead>
<tbody>
<tr>
<td>Codegen</td>
<td>0.0545 s</td>
<td>0 s</td>
</tr>
<tr>
<td>Compile</td>
<td>4.21 s</td>
<td>0 s</td>
</tr>
<tr>
<td>Import</td>
<td>0.032 s</td>
<td>0.032 s</td>
</tr>
</tbody>
</table>
5.3 Implementation in C++
We modify the normal KDT C++ filter objects, which are instantiated with pointers to Python functions, by adding a function pointer that is checked before executing the upcall to Python. This function pointer is set by our translation machinery to point to the translated function in C++. When executing a filter, the pointer is first checked, and if non-null, directly calls the appropriate function.
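The dispatch logic can be mimicked in Python (a sketch; in the real system the check happens in C++ on a raw function pointer, with `None` here playing the role of a null pointer):

```python
class FilterWrapper:
    """Sketch of KDT's filter dispatch: use the translated (fast)
    filter if the specializer produced one, else fall back to the
    Python upcall."""

    def __init__(self, py_filter, translated=None):
        self.py_filter = py_filter
        self.translated = translated   # None ~ null function pointer

    def __call__(self, e):
        if self.translated is not None:
            return self.translated(e)  # fast path: compiled filter
        return self.py_filter(e)       # slow path: upcall into Python
```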
Compared to Combinatorial BLAS, at runtime we have additional sources of overhead relating to the null check and the function pointer call. However, relative to the non-translated KDT machinery, these are trivial costs for filtering, particularly compared to the penalty of upcalling into Python.
Overheads of code generation are shown in Table 1. On first running using a particular filter, the DSL infrastructure translates and compiles the filter in C++. Most of the time here is spent calling the external C++ compiler, which is not optimized for speed. Subsequent calls only incur the penalty of Python’s import statement, which loads the cached library.
6. A ROOFLINE MODEL OF BFS
In this section, we extend the Roofline model [24] to quantify the performance bounds of BFS as a function of optimization and filter success rate. The Roofline model is a visually intuitive representation of the performance characteristics of a kernel on a specific machine. It uses bound and bottleneck analysis to delineate performance bounds arising from bandwidth or compute limits. In the past, the Roofline model has primarily been used for kernels found in high-performance computing. These kernels tend to express performance in floating-point operations per second and are typically bound by the product of arithmetic intensity (flops per byte) and STREAM [22] (long unit-stride) bandwidth. In the context of graph analytics, none of these assumptions hold.
In order to model BFS performance, we decouple in-core compute limits (filter performance as measured in processed edges per second) from memory access performance. The in-core filter performance limits were derived by extracting the relevant CombBLAS, KDT, and SEJITS+KDT versions of the kernels and targeting arrays that fit in each core's cache. We run the edge-processing inner kernels 10000 times (as opposed to once) so that memory-system effects are factored out, yielding the in-core compute limits.
Analogous to arithmetic intensity, we can quantify the average number of bytes we must transfer from DRAM per edge we process — bytes per processed edge. In the following analysis, the indices are 8 bytes and the edge payload is 16 bytes. BFS exhibits three memory access patterns. First, there is a unit-stride streaming access pattern arising from access of vertex pointers (this is amortized by degree) as well as the creation of a sparse output vector that acts as the new frontier (index, parent’s index). The latter incurs 32 bytes of traffic per traversed edge in write-allocate caches assuming the edge was not filtered. Second, access to the adjacency list follows a stanza-like memory access pattern. That is, small blocks (stanzas) of consecutive elements are fetched from effectively random locations in memory. These stanzas are typically less than the average degree. This corresponds to approximately 24 bytes (16 for payload and 8 for index) of DRAM traffic per processed edge. Finally, updates to the list of visited vertices and the indirections when accessing the graph data structure exhibit a memory access pattern in which effectively random 64-bit elements are updated (assuming the edge was not filtered). Similarly, each visited vertex generates 24 bytes of random access traffic to follow indirections on the graph structure before being able to access its edges. In order to quantify these bandwidths, we wrote a custom version of STREAM that provides stanza-like memory access patterns (read or update) with spatial locality varying from 8 bytes (random access) to the size of the array (STREAM).
The memory bandwidth requirements depend on the number of edges processed (examined), number of edges traversed (that pass the filter), and the number of vertices in the frontier over all iterations. For instance, an update to the list of visited vertices only happens if the edge actually passes the filter. Typically, the number of edges traversed is roughly equal to the permeability of the filter times the number of edges processed. To get a more accurate estimate, we collected statistics from one of the synthetically generated R-MAT graphs that are used in our experiments. These statistics are summarized in Table 2. Similarly, we quantify the volume of data movement by operation and memory access type (random, stanza-like, and streaming) noting the corresponding bandwidth on Mirasol, our Intel Xeon E7-8870 test system (see Section 7), in Table 3. Combining Tables 2 and 3, we calculate the average number of processed edges per second as a function of filter permeability by summing data movement time by type and inverting.
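As a sketch of that calculation (byte counts and bandwidths taken from Table 3, per-permeability counts from Table 2; the helper name and exact cost assignment are ours):

```python
# Roofline-style bound: sum data-movement time by memory access type
# (random, stanza-like, streaming) at Mirasol's measured bandwidths,
# then invert to get processed edges per second.
GB = 1e9

def bfs_bound(vertices_visited, edges_traversed, edges_processed):
    t_random = (24 * vertices_visited + 8 * edges_traversed) / (9.09 * GB)
    t_stanza = (24 * edges_processed) / (36.6 * GB)
    t_stream = (8 * vertices_visited + 32 * edges_traversed) / (106 * GB)
    return edges_processed / (t_random + t_stanza + t_stream)

# 1%-permeability row of Table 2 (scale-23 R-MAT):
bound_1pct = bfs_bound(655_904, 2.5e6, 213e6)
```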
Figure 9 presents the resultant Roofline-inspired model for Mirasol. Note that these are all upper bounds on the best performance achievable and the underlying implementation might incur additional overheads from internal data structures, MPI buffers, etc. For example, it is common to locally sort the discovered vertices to efficiently merge them later in the incoming processor; an overhead we do not account for as it is not an essential step of the algorithm.
As the Roofline model selects ceilings by optimization, and bounds performance by their minimum, we too may select a filter implementation (pure Python KDT, SEJITS+KDT,
Table 2: Statistics about the filtered BFS runs on the R-MAT graph of Scale 23 (M: million)
<table>
<thead>
<tr>
<th>Filter permeability</th>
<th>Vertices visited</th>
<th>Edges traversed</th>
<th>Edges processed</th>
</tr>
</thead>
<tbody>
<tr>
<td>1%</td>
<td>655,904</td>
<td>2.5 M</td>
<td>213 M</td>
</tr>
<tr>
<td>10%</td>
<td>2,204,599</td>
<td>25.8 M</td>
<td>250 M</td>
</tr>
<tr>
<td>25%</td>
<td>3,102,515</td>
<td>64.6 M</td>
<td>255 M</td>
</tr>
<tr>
<td>100%</td>
<td>4,607,907</td>
<td>258 M</td>
<td>258 M</td>
</tr>
</tbody>
</table>
Table 3: Breakdown of the volume of data movement by memory access pattern and operation.
<table>
<thead>
<tr>
<th>Memory access type</th>
<th>Vertices visited</th>
<th>Edges traversed</th>
<th>Edges processed</th>
<th>Bandwidth on Mirasol</th>
</tr>
</thead>
<tbody>
<tr>
<td>Random</td>
<td>24 bytes</td>
<td>8 bytes</td>
<td>0</td>
<td>9.09 GB/s</td>
</tr>
<tr>
<td>Stanza</td>
<td>0</td>
<td>0</td>
<td>24 bytes</td>
<td>36.6 GB/s</td>
</tr>
<tr>
<td>Stream</td>
<td>8 bytes</td>
<td>32 bytes</td>
<td>0</td>
<td>106 GB/s</td>
</tr>
</tbody>
</table>
or the CombBLAS limit) and the weighted bandwidth limit (in black) and look for the minimum.
We observe that a pure Python KDT filter will result in a performance bound more than an order of magnitude lower than the bandwidth limit. Conversely, the bandwidth limit is about 25× lower than the CombBLAS in-core performance limit. Ultimately, the performance of a SEJITS-specialized filter is sufficiently fast to ensure a BFS implementation will be bandwidth-bound. This is a very important observation that explains why SEJITS+KDT performance is so close to CombBLAS performance in practice (as shown later in Section 7), even though its in-core performance is 4× slower.
7. EXPERIMENTS
7.1 Methodology
To evaluate our methodology, we examine graph analysis behavior on Mirasol, an Intel Nehalem-based machine, as well as the Hopper Cray XE6 supercomputer. Mirasol is a single-node platform composed of four Intel Xeon E7-8870 processors. Each socket has ten cores running at 2.4 GHz and supports two-way simultaneous multithreading (20 thread contexts per socket). The cores are connected to a very large 30 MB L3 cache via a ring architecture. The sustained stream bandwidth is about 30 GB/s per socket. The machine has 256 GB of 1067 MHz DDR3 RAM. We use OpenMPI 1.4.3 with GCC C++ compiler version 4.6.2 and Python 2.6.6.
Hopper is a Cray XE6 massively parallel processing (MPP) system, built from dual-socket 12-core “Magny-Cours” Opteron compute nodes. In reality, each socket (multichip module) has two dual hex-core chips, and so a node can be viewed as a four-chip compute configuration with strong NUMA properties. Each Opteron chip contains six superscalar, out-of-order cores capable of completing one (dual-slot) SIMD add and one SIMD multiply per cycle. Additionally, each core has private 64 KB L1 and 512 KB low-latency L2 caches. The six cores on a chip share a 6MB L3 cache and dual DDR3-1333 memory controllers capable of providing an average STREAM [22] bandwidth of 12GB/s per chip. Each pair of compute nodes shares one Gemini network chip, which collectively form a 3D torus. We use Cray’s MPI implementation, which is based on MPICH2, and compile our code with GCC C++ compiler version 4.6.2 and Python 2.7. Complicating our experiments, some compute nodes do not contain a compiler; we ensured that a compute node with compilers available was used to build the SEJITS+KDT filters.
7.2 Test data sets
For most of our parallel scaling studies, we use synthetically-generated R-MAT [18] graphs with a very skewed degree distribution. An R-MAT graph of scale \( N \) has \( 2^N \) vertices and approximately \( \text{edgefactor} \times 2^N \) edges. In our tests, our edgefactor is 56, and our R-MAT seed parameters \( a, b, c \), and \( d \) are 0.59, 0.19, 0.19, 0.05 respectively. After generating this non-semantic (boolean) graph, the edge payloads are artificially introduced using a random number generator in a way that ensures target filter permeability. The edge type is the same as the Twitter edge type described below, to be consistent between experiments on real and synthetic data.
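The R-MAT generation process can be sketched as recursive quadrant selection with the parameters above (our illustration; production generators are parallel and vectorized):

```python
import random

def rmat_edge(scale, a=0.59, b=0.19, c=0.19, d=0.05, rng=random):
    # Pick one quadrant per recursion level; after `scale` levels the
    # (row, col) pair addresses a vertex pair in a 2^scale x 2^scale
    # adjacency matrix. Probabilities a, b, c, d select the quadrant.
    row = col = 0
    for level in range(scale):
        r = rng.random()
        half = 1 << (scale - level - 1)
        if r < a:                 # top-left quadrant
            pass
        elif r < a + b:           # top-right
            col += half
        elif r < a + b + c:       # bottom-left
            row += half
        else:                     # bottom-right
            row += half
            col += half
    return row, col
```

Calling `rmat_edge(scale)` repeatedly (edgefactor × 2^scale times) yields the skewed degree distribution described above.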
We also use graphs from real social network interactions, from anonymized Twitter data. In our Twitter graphs, edges can represent two different types of interactions. The first interaction is the “following” relationship, where an edge from \( v_i \) to \( v_j \) means that \( v_i \) is following \( v_j \) (note that these directions are consistent with the common authority-hub definitions in the World Wide Web). The second interaction encodes an abbreviated “retweet” relationship: an edge from \( v_i \) to \( v_j \) means that \( v_i \) has mentioned \( v_j \) at least once in their tweets. The edge also keeps the number of such tweets (count) as well as the last tweet date if count is larger than one.
The tweets occurred in the period of June-December of 2009. To allow scaling studies, we create subsets of these tweets based on the date they occur. The small dataset contains tweets from the first two weeks of June, the medium dataset contains tweets that happened in June and July, the large dataset contains tweets dated June-September, and finally the huge dataset contains all the tweets from June to December.
These partial tweets are then induced upon the graph that represents the follower/followee relationship. If a person tweeted someone or has been tweeted by someone, then the vertex is retained in the tweet-induced combined graph.
**Table 4: Sizes (vertex and edge counts) of different combined twitter graphs.**
<table>
<thead>
<tr>
<th>Label</th>
<th>Vertices (millions)</th>
<th>Tweet edges (millions)</th>
<th>Follow edges (millions)</th>
<th>Tweet&follow edges (millions)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Small</td>
<td>0.5</td>
<td>0.7</td>
<td>65.3</td>
<td>0.3</td>
</tr>
<tr>
<td>Medium</td>
<td>4.2</td>
<td>14.2</td>
<td>386.5</td>
<td>4.8</td>
</tr>
<tr>
<td>Large</td>
<td>11.3</td>
<td>59.7</td>
<td>589.1</td>
<td>12.5</td>
</tr>
<tr>
<td>Huge</td>
<td>16.8</td>
<td>102.4</td>
<td>634.2</td>
<td>15.6</td>
</tr>
</tbody>
</table>
**Figure 11:** The edge data structure used for the tweet-induced combined (tweeting+following) graph in C++ (methods are omitted for brevity)
**Figure 12:** Relative breadth-first search performance of four methods. The y-axis uses a log scale. The experiments are run using 24 nodes of Hopper, where each node has two 12-core AMD processors.
**Figure 13:** Relative filtered breadth-first search performance of four methods on real Twitter data. The y-axis is in seconds on a log scale. The runs use 36 cores (4 sockets) of Intel Xeon E7-8870 processors.
### 7.3 Experimental results
**Synthetic data set:** Figure 12 shows the relative distributed-memory performance of four methods in performing breadth-first search on a graph with 32 million vertices and 512 million edges, with varying filter selectivity. The structure of the input graph is an R-MAT of scale 25, and the edges are artificially introduced so that the specified percentage of edges pass the filter. These experiments are run on Hopper using 576 MPI processes with one MPI process per core. A similar figure (Figure 2) for Mirasol exists in the introduction. The SEJITS+KDT implementation closely tracks CombBLAS performance, except for the 100% filter; the performance hit here is mostly due to anomalous performance variability on the test machine.
**Twitter data set:** The filter used in the experiments with the Twitter data set keeps edges whose latest retweeting interaction happened by June 30, 2009, and is explained in detail in Section 1.3. Figure 13 shows the relative performance of four systems in performing breadth-first search on real graphs that represent the Twitter interaction data on Mirasol. Figure 14 shows the same experiment on Hopper using 576 MPI processes. SEJITS+KDT's performance is identical to the performance of CombBLAS on these data sets, showing that for real-life inspired cases, our approach is as fast as the underlying high-performance library.
**Parallel scaling:** The parallel scaling of our approach is shown in Figure 15 for lower concurrencies on Mirasol. CombBLAS achieves remarkable linear scaling with increasing process counts (34-36X on 36 cores), while SEJITS+KDT closely tracks its performance and scaling. Single-core KDT runs did not finish in a reasonable time to report. We do not report the performance of materialized filters, as they were previously shown to be the slowest.
Parallel scaling at higher concurrencies is done on Hopper, using the scale 25 synthetic R-MAT data set. Figure 16 shows the comparative performance of KDT on-the-fly filters, SEJITS+KDT, and CombBLAS, with 10% and 25% filter permeability. Finally, we show weak scaling results on Hopper using 1% filter permeability (other cases experienced similar performance). In this run, shown in Figure 17, each MPI process is responsible for approximately 11 million original edges (hence 22 million edges after symmetricization). More concretely, 121-concurrency runs are obtained on a scale 23 R-MAT graph, 576-concurrency runs on a scale 25 R-MAT graph, and 2025-concurrency runs on a scale 27 R-MAT graph (1 billion edges). The KDT curve is mostly flat (only 9% deviation) due to its in-core computational bottlenecks, while SEJITS+KDT and CombBLAS show higher deviations (54% and 62%, respectively) from the perfectly flat line. However, these deviations are expected in a large-scale BFS run and are experienced on similar architectures [6].
8. CONCLUSION
The KDT graph analytics system achieves customizability through user-defined filters, high performance through the use of a scalable parallel library, and conceptual simplicity through appropriate graph abstractions expressed in a high-level language.
We have shown that the performance hit of expressing filters in a high-level language can be mitigated by Just-in-Time Specialization. In particular, we have shown that our embedded DSL for filters can enable Python code to achieve comparable performance to a pure C++ implementation. A roofline analysis shows that the specializer enables filtering to move from being compute-bound to memory bandwidth-bound. We demonstrated our approach on both real-world data and large generated datasets. Our approach scales to graphs on the order of hundreds of millions of edges, and machines with thousands of processors.
In future work we will further generalize our DSL to support a larger subset of Python, as well as expand SEJITS support beyond filtering to cover more KDT primitives. An open question is whether CombBLAS performance can be pushed closer to the bandwidth limit by eliminating internal data structure overheads.
**Acknowledgements**
This work was supported in part by National Science Foundation grant CNS-0709385. Portions of this work were performed at the UC Berkeley Parallel Computing Laboratory.
Figure 17: Parallel ‘weak scaling’ results of filtered BFS on Hopper, using 1% percent permeability. y-axis is in log scale, time is in seconds.
(Par Lab), supported by DARPA (contract #FA8750-10-1-0191) and by the Universal Parallel Computing Research Centers (UPCRC) awards from Microsoft Corp. (Award #024263) and Intel Corp. (Award #024894), with matching funds from the UC Discovery Grant (#DIG07-10227) and additional support from Par Lab affiliates National Instruments, NEC, Nokia, NVIDIA, Oracle, and Samsung. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05-CH-11231. Authors from Lawrence Berkeley National Laboratory were supported by the DOE Office of Advanced Scientific Computing Research under contract number DE-AC02-05-CH-11231.
9. REFERENCES
|
2024-11-24
|
2024-11-24
|
a523a30540d85c842181400d859e3597895c3faf
|
Eclipse Committer Bootcamp
Copyright © 2013, 2014 The Eclipse Foundation. Made available under the terms of the EPL
Wayne Beaton - Open Source Projects
emo@eclipse.org
Sharon Corbett - Intellectual Property
emo-ip-team@eclipse.org
Denis Roy - Information Technology
webmaster@eclipse.org
# Eclipse Foundation Staff at EclipseCon
[Schedule table: Eclipse Foundation staff availability at EclipseCon, Tuesday through Thursday, during the morning break (10:00 - 10:30), the lunch slots (11:50 - 12:40 and 12:40 - 13:30), and the afternoon break (15:35 - 16:15). Topics and staff: Membership and Working Groups (Andrew, Gaël, Ralph, Wayne), Projects and Evangelism (Wayne), IoT and Evangelism (Benjamin), Intellectual Property (Sharon), IT (Matt, Denis, Chris G).]
Eclipse Committer Bootcamp
Part I: Exploiting the Eclipse Development Process for Fun and Profit
http://eclipse.org/projects/dev_process
Agenda
• Open source rules of engagement
• Projects, Code, and Resources
• Who's Who
• Project Management Infrastructure
• Quiz
The Eclipse Development Process
- Open source rules of engagement
- Governance, structure, definitions, reviews
- General framework for projects
- Day-by-day development rules/process is defined by the project
Open Source Rules of Engagement
- Transparent
- Open
- Meritocracy
Transparent: Invite Participation
- Project discussions, minutes, deliberations, project plans, plans for new features, and other artifacts are open, public, and easily accessible
- Use “dev” list for project-related discussion
- Capture all work in Bugzilla records
Open: Accept Participation
- The same opportunity to all
- Everyone participates with the same rules
- There are no rules to exclude any potential contributors
- Including direct competitors in the marketplace
Meritocracy: Earn your Way in
- The more you contribute the more responsibility you will earn
- Leadership roles in Eclipse are also merit-based and earned by peer acclaim
Three Communities
• Users
- Users are, well... users
• Adopters
- Individuals, groups, organizations
- Build products, extensions, based on your project
• Developers
- Contributors, committers
Provisioning
- Submit provisioning request
- IP Team handles committer paperwork
- Webmaster team provisions project resources
Projects, Committers, and Resources
Some Sharing
- A parent project may share:
- Builds, Downloads
- Website, mailing lists, and forums
- May not share:
- Committers
- Repositories, Bugzilla
Leadership Chain
Board of Directors
EMO(ED)
Project Management Committee
Project Leads
Project Members
• Project Management Committee (PMC)
– Oversight, IP process, various approvals
• Project Lead(s)
– Leadership. Duh.
• Committers
– Eclipse IP Due Diligence Process
– Eclipse Development Process
Eclipse Management Organization (EMO)
- Eclipse Foundation Staff
- Architecture Council
- Planning Council
- EMO (ED): Executive Director
- Email: emo@eclipse.org
Architecture Council
- Stewards of the Eclipse Development Process
- Architectural oversight
- Best practices
- Mentors for new Eclipse projects
- Use your mentors!
Planning Council
- Simultaneous Release
- Cross-project planning
- Architectural issues
- User interface conflicts
- Other coordination and integration issues
Project Metadata
- Project id
- e.g. technology.egit, soa.winery, eclipse.jdt.ui
- Description, scope, logo, technology type
- Releases
- Relationships to other projects
- Build technologies
Project Metadata: The PMI
http://projects.eclipse.org/projects/<projectId>
Project Metadata
- Description
- Present tense
- “elevator pitch”
- Scope
- Logo
- Categorization
- Links
- More...
PMI: Categorization
**CATEGORIZATION, RELATIONSHIPS, AND TAGS**
**Technology Types**
- [ ] Language
- [ ] Machine-to-Machine
- [ ] Modeling
- [ ] OSGi
- [ ] Runtime
- [ ] Testing
- [x] Tools
Select the types of technology *produced* by the project.
**RELATED PROJECTS**
Specify any projects that are related to this project. What "related" means is really up to you. The values you specify here will be rendered on the project page as hyperlinks.
[Add another item]
PMI: Source Code
- Contribution Message
- Bugzilla
- Source Repositories
PMI: Build
- Description
- Build Technologies
- Documentation
- Links
PMI: Downloads
- “Big Button” URL
- Message
- Marketplace
- Update sites
- Downloads
PMI: Releases and Reviews
- Releases and reviews have their own records
Defining a New Release
Create a new release. **Note that a review is required for all major and minor releases.** Please review the release cycle documentation.
**Release date**
- Mar 12 2014
**Name**
The release name must contain major and minor version numbers, and may contain a service number and patch information; e.g. "5.6 (Kepler)", "1.0.1".
Create
Create and edit
Eclipse SCADA committers
The following commands are available to project committers:
**Contributor Agreement**
- Validate Contributor CLA
**Intellectual Property**
- Create a Contribution Questionnaire
- Generate IP Log (project)
- Review downloads
**Communication**
- PMC Mailing list
- Send Email to the PMC...
- Send Email to the Dev List...
**Releases**
- Create a new release
Release Metadata
• Description
– Present tense
– “Elevator Pitch”
• Release Date
• Release Type (major, minor, service)
Theme Items
Name
1.0.0
Description
Implement and stabilize a first version of the described features including a defined API.
Committed
https://bugs.eclipse.org/bugs/buglist.cgi?list_id=5533390&classification=Modeling&query_format=advanced&bug_status=REOPENED&bug...
A Bugzilla search URL that identifies the committed items for this theme in this release.
Themes:
1.0.0
Implement and stabilize a first version of the described features including a defined API.
Committed Items
Many ComposedAdapterFactory instantiations without disposal [368340] (target milestone: 1.0.0)
Update Site is missing dependencies to features [381403] (target milestone: 1.0.0M4)
Model ECPPort and ECPRepository with EMF [379562] (target milestone: 1.0.0M1)
[ECP2] use context.getEditingDomain() instead of AdapterFactoryEditingDomain.getEditingDomainFor() [381128] (target milestone: 1.0.0M2)
Modularize ECP for (better) reuse in other containers other than 3.x editors [382328] (target milestone: 1.0.0M1)
ECP should be runnable in a non-cdo and/or non-emfstore context [382365] (target milestone: 1.0.0M1)
Performance optimization of model element deletion [382516] (target milestone: 1.0.0M3)
Project Plans
BPMN2 Modeler Project 1.0 Plan
1.0
Description:
The BPMN2 Modeler is a graphical modeling tool which allows creation and editing of BPMN 2.0 spec compliant diagrams. The tool is built on Eclipse Graphiti and uses the MDT BPMN2 project as the underlying model. This release represents the first stable version of the editor.
Version 1.0, while still not a final, polished product, is very stable and offers a very complete API that achieves the goals set for this release. The project leadership would like to thank the university researchers and community users who helped define and refine the editor API (you know who you are 😊) and for making BPMN2 Modeler a better product.
Deliverables:
- Generic BPMN2 editor
- jBPMN extension plug-in
- Code samples and tutorials
Compatibility:
This, and all releases going forward, will only support Graphiti version 0.10.x and higher. If the Graphiti project releases a new version with breaking API changes, BPMN2 Modeler will be updated to support those new versions of Graphiti.
A new extension point has been added to allow extension plug-ins to define custom extensions for BPMN 2.0 connection elements as well as shapes. See Bug 415769 for details.
The class hierarchy for Custom Tasks has been refactored to allow extension plug-ins to define custom extensions for BPMN 2.0. See Bug 415769 for details.
Also see the New & Noteworthy page for more information about compatibility issues.
Internationalization:
String localization for all UI messages will be addressed in the next service release scheduled for end of Q4, 2013.
Target Environments:
This release requires Java 6 and is targeted for Kepler. Testing has been done on the following hardware/OS platforms:
- MS-Windows 7
- Fedora Linux 18
- MacOS X 10.8 Mountain Lion
<table>
<thead>
<tr>
<th>Name</th>
<th>Date</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>M1</td>
<td>2012/08/15</td>
<td>Initial Contribution</td>
</tr>
<tr>
<td>M2</td>
<td>2012/09/15</td>
<td>Milestone Build</td>
</tr>
<tr>
<td>RC1</td>
<td>2012/09/30</td>
<td>Release Candidate for 0.1.0</td>
</tr>
<tr>
<td>0.1.0</td>
<td>2012/10/15</td>
<td>Code Stabilization Release</td>
</tr>
</tbody>
</table>
Milestones
- Name, date, description
Themes
- Bugzilla URLs
Optional
- Deliverables, Compatibility, Target Environments, Internationalization
Description
- Paragraph, no-bullets preferred
Quiz
If you require assistance, you should contact:
(A) Your project mentors
(B) Your PMC
(C) emo@eclipse.org
(D) 411
Quiz
The open source rules of engagement include:
(A) Governance, Releases, and Reviews
(B) Openness, Transparency, and Morrissey
(C) Participation, Contribution, and Openness
(D) Openness, Transparency, and Meritocracy
Quiz
For an Eclipse project “open” means:
(A) Operational (open for business)
(B) Level playing field (open opportunity)
(C) Visibility (open book)
(D) Source code is available for free download
Links and Stuff
• Cross Project Issues Dev mailing list
– https://dev.eclipse.org/mailman/listinfo/cross-project-issues-dev
• Project-specific mailing lists
– https://dev.eclipse.org/mailman/listinfo/<short-name>-dev
– https://dev.eclipse.org/mailman/listinfo/<pmc-short-name>-pmc
• Development Resources
• The Eclipse Development Process
– http://eclipse.org/projects/dev_process
• Project Metadata
– https://wiki.eclipse.org/Project_Management_Infrastructure/Project_Metadata
• The Project Management Infrastructure
– http://wiki.eclipse.org/Project_Management_Infrastructure
• Starting a Project
Part II
Sharon Corbett
Intellectual Property Management
AGENDA
• Intellectual Property Overview
• Due Diligence Process Poster
• Contribution Types
• Applicable Project Licenses
• Arranging an Initial Project CQ
• IPzilla
• Quiz
INTELLECTUAL PROPERTY
(IP) refers to creations of the mind: Idea, Invention, Process, etc.
Meet the Team
Janet Campbell
- Director of Intellectual Property
- Legal Counsel
Sharon Corbett
- Intellectual Property Management
Pablo Jacobovich
- Intellectual Property Analyst
A Brief Explanation of the Eclipse IP Policy
Benefits of the Eclipse IP Management
Who has a Role to Play?
Everyone has a Role to Play!
- Committers
- PMC
- Project
- The Eclipse Foundation
Due Diligence Process
Figure 1, 2, & 3
Figure 1
Written 100% by Submitting Committer or Committers on the same Project under the supervision of the PMC
(See Definitions on Page 3)
Figure 2
Written 100% by employees of the same employer as the Submitting Committer: (a) under the supervision of the PMC; and (b) where the employer has signed a Member Committer Agreement.
(See Definitions on Page 3)
Figure 3
Written 100% by Submitting Contributor (Non-Committer) and Submitted under the terms of the Project License (typically EPL)
(See Definitions on Page 3)
Figure 7
Confirm that the Contribution:
1. Contains No Cryptography
2. Is Developed from Scratch (without incorporating content from elsewhere) – Contact the EMO if you aren’t sure.
3. Is 100% Project License Code
Figure 9
Confirm that the Contribution:
1. Contains No Cryptography
2. Is Developed from Scratch (without incorporating content from elsewhere) – Contact the EMO if you aren’t sure.
3. Is 100% Project License Code
Includes all content, such as XML, DTDs, fonts, images, logos, trademarks.
Figure 8
Definitions:
“Project License” — your default Project license will be the EPL. Any other licensing strategy requires a unanimous vote of the Eclipse Board of Directors.
“Non-Eclipse Content” — any code maintained on servers other than those of the Eclipse Foundation.
“Under Supervision of the PMC” — refers to general supervision; sufficient to ensure the code being submitted is in line with the goals of the project from a technical standpoint. This level of supervision may vary by project. Determination is to be made by the relevant PMC of the project.
“Submitting Committer” — An Eclipse committer on the project at the time of development. Code developed prior to becoming an Eclipse committer requires due diligence review.
Third Party Dependencies:
Does your project work with or depend on other third party content?
Please consult the Eclipse Third Party Dependency Policy
Moving Code to Eclipse:
Interested in moving code from somewhere else to Eclipse and maintaining it at Eclipse?
Contact pmo@eclipse.org
Distributing Eclipse Projects, Plug-Ins & Bundles — Guidelines:
Release Candidate Distributions must not contain Non-Release Candidate (i.e., neither “RC1” nor final release “1.0”) distributions from other Eclipse Projects, as such distributions may contain content that has not been reviewed and approved.
Release Candidate Distributions may pre-req Non-Release Candidate distributions from other Eclipse Projects, provided the downstream consumer is made aware that the pre-req’d content may not have been reviewed and approved.
Non-Release Candidate Distributions may contain Non-Release Candidate distributions from other Eclipse Projects.
Non-Release Candidate Distributions may pre-req Non-Release Candidate distributions from other Eclipse Projects.
Simultaneous Release: All Projects participating in the simultaneous release must be Release Candidates themselves. The above guidelines apply to any Project wishing to pre-req or incorporate other Eclipse Projects.
Contribution Questionnaires (CQs)
**Project Code**
- Hosted/Maintained @Eclipse
- Project Licensed
**Other**
- Not Hosted/Not Maintained @Eclipse
- Various License(s)
Eclipse Project License
EPL (Typical)
Dual Licensed Scenarios are Possible
Non-Code, Example, and Other Content: EPL, CCSA 2.0, CC 3.0 (Unported)
Types of Project Licensed CQs
- Initial to Kick off a Project
- Community Contributions
- Authored by other Committers
- Moving to Eclipse
Arranging a Project Licensed CQ
- Origin
- Rights
- Cryptography
- Copyright/License Headers
- Link to Contribution
- Attach Source (Zip)
- About, License & Notice Files
Submit a CQ
Create a CQ
Please make sure that you are familiar with the Eclipse Legal Process Poster before continuing with this questionnaire.
Specify the type of contribution:
- Contribution of code to be maintained by an Eclipse Foundation project
- Third-Party Code Request
Select the “Third-Party” option if you wish to use, re-use, reference or distribute third-party code that is maintained elsewhere. Note: this includes EPL code that is not maintained at eclipse.org.
Continue
Welcome to IPZILLA
Quiz
What is the typical project license for Eclipse projects?
1. Eclipse Distribution License
2. Eclipse Public License
3. Creative Commons Attribution 3.0 Unported
4. Apache 2.0 License
The typical project license for Eclipse projects is...
A. Eclipse Distribution License
B. Eclipse Public License
C. Creative Commons Attribution 3.0 Unported
D. Apache 2.0 License
Quiz
What are the benefits of the Eclipse IP process?
A. Provide a reduced risk of copyright infringement
B. Provide a reduced risk of litigation
C. Provide developer freedom
D. Help committers write awesome code
Answer
The benefits of the Eclipse IP process are…
A. Provide a reduced risk of copyright infringement
B. Provide a reduced risk of litigation
C. Provide developer freedom
D. Help committers write awesome code
End Part 1
Eclipse Committer Bootcamp
Part III: Provisioning & Server Resources
http://eclip.se/q
Agenda
- The team
- Server infrastructure overview
- Your Eclipse Foundation account, committer ID
- Project provisioning process
- Committing your initial contribution
- Interacting with users and other developers
- Asking for help
- Quiz
Your Webmaster Team
- Matt Ward – Server Samurai
- Thanh Ha – Build Guru / Git Ninja
- Denis Roy – Just Some Guy
- Web Developers: Chris Guindon & Edouard Poitras
webmaster@eclipse.org
Server Infrastructure
- 3 Cabinets in Ottawa, Canada
- 60 kW redundant AC power
- 1 Gbps backends
- 1 Gbps BGP-4 bandwidth (rate limited)
- 45 TB/month
- 45M web pages/month (www & wiki)
- Download servers: 9M files/day (14M hits)
- ~60 download mirrors worldwide
- 99.995% service availability
Source Code: Git, Gerrit
index: eclipse.platform.git
Bug 419503 - Dirty working tree: about mappings
Signed-off-by: Thanh Ha <thanh.ha@eclipse.org>
Diffstat (more/less context) (ignore whitespace changes)
-rw-r-- platform/org.eclipse.platform/about.mappings 2
-rw-r-- platform/org.eclipse.platform/pom.xml 25
http://git.eclipse.org
Gerrit Code Review
- Any contributor can push to Gerrit repository
- Review/vote before merging with master
- Committer votes
- Hudson “votes”
http://git.eclipse.org/r
Contributions and Community
- Contributions come in through Bugzilla or Gerrit
- CLA (Contributor License Agreement)
- Everyone must sign-off!
CLA
• http://projects.eclipse.org
Issue Tracker: Bugzilla
Bug 401288 - Require possibility to specify workspace (data) directory of tests to run. (edit)
Status: RESOLVED FIXED (edit)
Product: Tycho
Component: Core
Version: 0.16.0
Hardware: All
Importance: P3
Target Milestone: 0.17.0
Assigned To: Jan Sievers (edit) (take)
QA Contact: (edit) (take)
Reported: 2013-02-20 05:16 EST by Johann Draschwandtner — CLA
Modified: 2013-05-03 08:26 EDT (History)
CC List: Add me to CC list 4 users (edit)
See Also: (add)
Attachments
proposed fix (1.34 KB, patch)
2012-07-27 13:06 EDT, Pedro Larios — CLA
Add an attachment (proposed patch, testcase, etc.) View All
Additional Comments:
Status: NEW
Mark as Duplicate
Pedro Larios — CLA 2012-07-27 13:00:26 EDT
Build Identifier: 2.0.4
SWTBotTable.columnCount returns 0 when there are no columns, or when a TableColumn has not been explicitly created. This causes the selection method to return a TableRow item with no text even though the selection is valid in the table widget.
Reproducible: Always
## User Community: Forums
<table>
<thead>
<tr>
<th>Title</th>
<th>By:</th>
<th>Views</th>
<th>Created</th>
<th>By:</th>
</tr>
</thead>
<tbody>
<tr>
<td>How Install Spring in Eclipse 3.7</td>
<td>Missing name Missing name</td>
<td>2</td>
<td>Tue, 08 October 2013</td>
<td>Missing name Missing name</td>
</tr>
<tr>
<td>Installed the standard version of eclipse, Do I have to install jre else?</td>
<td>Zhg Mising name</td>
<td>11</td>
<td>Tue, 08 October 2013</td>
<td>Zhg Mising name</td>
</tr>
<tr>
<td>What is different between Standard version and IDE for developer?</td>
<td>Zhg Mising name</td>
<td>15</td>
<td>Tue, 08 October 2013</td>
<td>Zhg Mising name</td>
</tr>
<tr>
<td>Eclipse + Blackberry + Phonegap project</td>
<td>Paul Kilroy</td>
<td>0</td>
<td>Mon, 07 October 2013</td>
<td>Paul Kilroy</td>
</tr>
<tr>
<td>wiki.eclipse.org/Eclipse.ini</td>
<td>Russell Bateman</td>
<td>2</td>
<td>Mon, 07 October 2013</td>
<td>Russell Bateman</td>
</tr>
</tbody>
</table>
---
<table>
<thead>
<tr>
<th>Topic</th>
<th>Re: Suitable javascript autocomple</th>
<th>ray zhang</th>
<th>10/09/2013 01:55</th>
</tr>
</thead>
<tbody>
<tr>
<td>Auto Numbering</td>
<td></td>
<td>awdesh parihar</td>
<td>05:10 AM</td>
</tr>
<tr>
<td>Jetty support on WinCE</td>
<td></td>
<td>Jay Bhatt</td>
<td>08:27 AM</td>
</tr>
<tr>
<td>Need help getting started with dandelion pl...</td>
<td></td>
<td>Paul Bowyer</td>
<td>08:27 AM</td>
</tr>
<tr>
<td>Cannot Find the Download link</td>
<td></td>
<td>Jayant Rajpurohit</td>
<td>08:44 AM</td>
</tr>
<tr>
<td>Eclipse + OpenKM</td>
<td></td>
<td>Ralph Laskowski</td>
<td>09:24 AM</td>
</tr>
<tr>
<td>Re: Cannot Find the Download link</td>
<td></td>
<td>Denis Roy</td>
<td>09:39 AM</td>
</tr>
<tr>
<td>Eclipse Suddenly Won't Start Up</td>
<td></td>
<td>Mike McGuire</td>
<td>11:57 AM</td>
</tr>
</tbody>
</table>
Mailing list: jubula-dev
Jubula platform and tools development
About jubula-dev
Jubula platform and tools development
Using jubula-dev
To post a message to all the list members, send email to jubula-dev@eclipse.org. You must be subscribed to the list before you can post. To see the collection of prior postings, visit the jubula-dev Archives or subscribe to this list's RSS feed.
Subscribing to jubula-dev
All contributions you make to our web site are governed by our Terms Of Use. Any information you provide about yourself in your interactions with the Eclipse Foundation is governed by our Privacy Policy.
Subscribe to jubula-dev by filling out the following form. You will be sent an email requesting confirmation, to prevent others from subscribing you without your consent. You may enter a privacy password below. This provides only mild security, but should prevent others from messing with your subscription. Do not use a valuable password, as it may be emailed back to you in cleartext.
Accounts
- Committer ID vs. Email ID
- Committer ID for SSH & Portal
- Email for everything else
- This is Open Source -- Email addresses are shown!
Project Provisioning process
- Project space: Git, www.eclipse.org, Bugzilla, Mailing Lists, Forum
- Committing IP-approved Initial Contribution to git.eclipse.org
- Culling history on Github-hosted repos
Project Website
Xtext
LANGUAGE DEVELOPMENT MADE EASY!
Building your own domain-specific languages has never been so easy. Just put your grammar in place and you not only get the working parser and linker but also first class Eclipse support.
Download
Interacting With Others
- Dev lists for committers (typically)
- Forums for user discussions (again, typically)
- Eclipse.org-committers mailing list
- Cross-project-issues-dev mailing list
- Bugzilla – Eclipse Foundation > Community
- webmaster@eclipse.org (Servers & Infra)
emo@eclipse.org (Project, community & process)
Quiz
Question 1: A contributor does NOT need a signed CLA when...
a) The Eclipse project uses Gerrit.
b) The Eclipse project uses Git.
c) Sending SPAM to wayne@eclipse.org
d) Working for a member company who has a signed membership agreement.
Quiz
Question 1: A contributor does NOT need a signed CLA when...
a) The Eclipse project uses Gerrit.
b) The Eclipse project uses Git.
c) Sending SPAM to wayne@eclipse.org
d) Working for a member company who has a signed membership agreement.
Quiz
Question 2: How can I help grow my user community?
a) Answer questions on the forums
b) Post FAQs on my project website
c) Make my project website inviting
d) Spend 100% of my time writing code
Question 2: How can I help grow my user community?
a) Answer questions on the forums
b) Post FAQs on my project website
c) Make my project website inviting
d) Spend 100% of my time writing code
Quiz
Question 3: When must I use my committer ID instead of my email address?
a) When logging on to the Forums
b) When writing an email
c) When being interrogated by airport security
d) When committing via SSH to git.eclipse.org
Quiz
Question 3: When must I use my committer ID instead of my email address?
a) When logging on to the Forums
b) When writing an email
c) When being interrogated by airport security
d) When committing via SSH to git.eclipse.org
Eclipse Committer Bootcamp
Part II: Managing Your Project
http://eclipse.org/projects/dev_process
Agenda
- Community development
- Elections
- Releases, plans, and reviews
- IP Logs
- Quiz
Writing code is fun, but...
- Open source rules of engagement
- Transparency, openness, meritocracy
- Have project-specific diversity goals
- Building diversity takes work
- Actively court contributors
- Be responsive when they do come
- “Kill with kindness”
Pragmatically Speaking...
- Keep project information up-to-date
- Project and release metadata, website, downloads
- Project code must be buildable
- Have a contribution guide
- “Getting started”
- CONTRIBUTING file in project repositories
- https://bugs.eclipse.org/397644
Contribution Guide
• Git, Gerrit, GitHub, ...
• Issue tracking (Bugzilla)
- “Help wanted” issues
• Project plan
• How to build
• How to engage (mailing lists, forums, Bugzilla)
Contribution Guide Generator
Contributing to Sapphire
Thanks for your interest in this project.
Project description:
Sapphire is a user interface development framework that improves productivity. Instead of focusing on individual widgets, layouts and data binding, the developers focus on modeling the semantics of the data and declaring the general intent of how the data is to be presented.
- https://projects.eclipse.org/projects/technology.sapphire
Developer resources:
Information regarding source code management, builds, coding standards, and more.
Contributor License Agreement:
Before your contribution can be accepted by the project, you need to create and electronically sign the Eclipse Foundation Contributor License Agreement (CLA).
Spend Time With the Community
Outreach
- Present at conferences
- Social media: Blog, tweet, ...
- Author papers, articles, ...
- You know your community best
- Where do they hang out?
- Help potential contributors find you
- Serving one community can build another
- e.g. A happy user community builds the adopter community; a large adopter community drives contribution
Meritocracy
- Nominees need to prove themselves
- How much merit is enough?
- Project-specific (work with your PMC)
- Tends to be qualitative, not quantitative
- Nomination criteria:
- Source code contributions
- Forum activity
- Subject matter expert
- ...
Project Lifecycle
- Declaration
- Proposal
- Creation Review
- Incubation
- Release Review
- Graduation Review
- Mature
- Release Review
- Elimination Review
- Archived
Releases
Capture Plan -> Implement -> Produce Milestone
Release
Release Review
Assemble Review Doc
PMC Review Documentation
-2 weeks
Assemble IP Log
IP Team Review IP Log
Start Release Review
-1 week
End Release Review
Publish Release
Release Review
• Major/minor releases
– Release review
– IP Log approval
– Plan to spend time planning/documenting release
• Service/Bugfix-only releases
– No review
– No IP Log approval
Release Naming
- `<major>.<minor>.<service>`
- e.g. 0.3, 1.2.4, ...
- Release
- e.g. 0.7
- Milestones
- Use the expected release name with M/RC
- e.g. 0.7M1, 0.7M2, 0.7M3, 0.7RC1, 0.7RC2
- Not for general public consumption
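The convention above (a two- or three-part version number, with milestone and release-candidate builds suffixed M*n* or RC*n*) can be checked mechanically. A minimal sketch in Java, assuming exactly the convention listed on this slide; the `ReleaseName` class and its regex are illustrative, not part of any Eclipse tooling:

```java
import java.util.regex.Pattern;

public class ReleaseName {
    // <major>.<minor> or <major>.<minor>.<service>, optionally followed by a
    // milestone (M1, M2, ...) or release candidate (RC1, RC2, ...) suffix,
    // e.g. "0.7", "1.2.4", "0.7M1", "0.7RC2".
    private static final Pattern RELEASE =
            Pattern.compile("\\d+\\.\\d+(\\.\\d+)?((M|RC)\\d+)?");

    public static boolean isValid(String name) {
        return RELEASE.matcher(name).matches();
    }
}
```

Names such as "0.7M1" or "1.2.4" pass, while "0.7.x" or "v1.0" do not.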
Incubation
• Releases conventionally use pre-1.0 names
• Incubation branding
– Incubation logo on their project home and primary download pages
– Downloads include the word "incubation" in the filename
• Not required for JAR files
– Bundle and feature names include the word "incubation"
• Not required for "Bundle-SymbolicName"s
• Incubation ends with a graduation review
– Generally combined with a release review
Other Reviews
• Graduation
– Generally combined with a release review
– Demonstrate committer familiarity with EDP/IP policy
– API stability, quality code
• Termination
– Lack of development resources, will, interest, ...
– Done?
The IP Log Generator
- Git/SVN
- Committers Activity (Dash)
- Repositories (PMI)
- Contributions (Dash)
- Licenses (Foundation DB)
- Contributions (Bugzilla)
IP Log Generator
IP Log
IP Log Review
Downloads Directory
Downloads Scanner
Technical Review
Legal Review
IP Log
Contributions REVIEW
The Download Scanner
- Linked from project page's “Committers” menu
- Validates third-party library use in project download directories
- Limited to Java/OSGi-based files
- Should be considered a guide
- Committers are responsible for following the Eclipse IP policy and process
- Don't count on this tool to get it exactly right
http://eclipse.org/projects/tools/downloads.php?id=<projectId>
Quiz
• A review is required for:
• (A) Bugfix releases
• (B) Committer elections
• (C) Producing milestone builds
• (D) Project creation
Quiz
To graduate, a project must:
(A) Have stable APIs
(B) Demonstrate knowledge of the EDP and IP Policy
(C) Invest in community outreach/development
(D) Have committer diversity
Quiz
Release plans are:
(A) Created at the beginning of a release cycle
(B) Modified throughout the release cycle
(C) Hastily assembled at the end of the release cycle
(D) Useless
Quiz
Release documentation:
(A) Must be approved
Links and Stuff (1/2)
- Eclipse Development Process
- http://www.eclipse.org/projects/dev_process
- Committer Due Diligence Guidelines
- Release Cycle
- Release Review
- Contribution Guide
- http://git.eclipse.org/c/egit/egit.git/plain/SUBMITTING_PATCHES
Links and Stuff (2/2)
- Incubation Branding
- Handling Git/Gerrit Contributions
- Download Scanner
- Bugzilla Contributions Review
- IP Logs
- IP Log Generator
The Eclipse Intellectual Property Process and You
Part IV
Agenda
• Using Third Party Libraries
• Dependency Categories
• Build & Test Workswith
• Parallel IP
• Review Stages
• Due Diligence Review
• IP Logs
• IP Best Practices
• Quiz
Contribution Questionnaires (CQs)
- **Project Code**
- Hosted/Maintained @Eclipse
- Project Licensed
- **Other**
- Not Hosted/Not Maintained @Eclipse
- Various License(s)
Approved Third Party Licenses
Apache Software License 1.1
Apache Software License 2.0
W3C Software License
Common Public License 1.0
IBM Public License 1.0
Mozilla Public License 1.1
Mozilla Public License 2.0
Common Development and Distribution License (CDDL) 1.0
GNU Free Documentation License 1.3
BSD
MIT
Example of Non Approved Third Party License
GNU General Public License (GPL)
Types of Project Licensed CQs
- Prerequisite
- Workswith
Arranging Third Party CQs
Has the package been previously approved for Eclipse distribution?
- Yes – Request Reuse
- No – Request a new CQ
Reuse - Piggyback
1. Orbit Bundle?
2. From Another Project?
- Verify Approved Attachment?
- Check for Subset Details
- No Requirement to Provide Source
- Immediate Approval
### Third Party CQs
<table>
<thead>
<tr>
<th>Origin</th>
<th>Contribution Mechanism</th>
<th>License(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project/Source URLs</td>
<td>Binary/Source</td>
<td>Modified/Unmodified</td>
</tr>
<tr>
<td>Attach Source (Zip)</td>
<td>No Nesting</td>
<td>Narrow Scope</td>
</tr>
</tbody>
</table>
About, License & Notice Files
Submit a CQ
Create a Contribution Questionnaire
Please make sure that you are familiar with the Eclipse Legal Process Poster before continuing with this questionnaire.
Specify the type of contribution.
- Contribution of code to be maintained by an Eclipse Foundation project
- Third-Party Code Request.
Select the “Third-Party” option if you wish to use, re-use, reference or distribute third-party code that is maintained elsewhere. Note: this includes EPL code that is not maintained at eclipse.org
Continue
Reuse – PiggyBack CQ
Create a Contribution Questionnaire for a Third Party Library
Enter the name of Library you wish to use. If we have a matching record in our IPZilla database, please select it for re-use. Otherwise enter the library name and version and choose 'continue'.
Tooltip Icons
✓ Approved ● Orbit Bundle
Name and Version of the Library:
antlr
✓ [CQ4865] ANTLR Runtime only: Version: 3.2 (ATO CQ3820)
✓ [CQ1921] ANTLR runtime Version: 3.0 (PB CQ1557)
✓ [CQ5988] ANTLR Runtime only: Version: 3.2 (PB Orbit CQ4865)
✓ [CQ5848] Jena 2.6.3 WITHOUT Version: Antlr (PB CQ4397)
✓ [CQ5828] ANTLR Runtime only: Version: 3.2 (PB Orbit CQ4865)
✓ [CQ5729] ANTLR Version: 3.0.1
✓ [CQ5706] ANTLR Runtime only: Version: 3.2 (PB Orbit CQ4865)
✓ [CQ4139] org.antlr Version: 3.0.1
✓ [CQ5622] ANTLR Runtime Version: 3.0 (PB Orbit CQ1921)
✓ [CQ5249] ANTLR Runtime only: Version 3.2 (PB Orbit CQ4865)
Third Party Dependency Policy
Build and Test Dependencies
• Non-distributable third party libraries and tools
• Required for build server only
• Grouped into single CQ, declared a “workswith” dependency if applicable
Parallel IP - Incubating
Is Project Conforming to Incubation?
Yes
Eligible for Parallel IP
No
CQ Remains In Queue
CQ Receives Checkin Based on Parallel IP
Parallel IP - Mature
Earlier Version of Package Approved?
- Yes
Is “Diff” between this version and previous version significant?
- Yes
- No
CQ Remains In Queue
CQ Receives Checkin Based on Mature Parallel IP
Due Diligence Review
Due Diligence Review
- Provenance (AKA Pedigree)
- License Compatibility
- Suitability
We’ve Done our Homework
<table>
<thead>
<tr>
<th>Review Complete</th>
<th>Areas of Concern Investigated</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Other open source projects contacted if required regarding investigation/resolutions</td>
</tr>
<tr>
<td></td>
<td>If approval is not possible, committer is contacted and advised of Foundation’s concerns and a technical workaround is investigated</td>
</tr>
</tbody>
</table>
Approved IPzilla CQ
Project IP Log
- Submission
- Technical Review
- IP Team Review
- Approval via email
YOU WANT YOUR CQ RESOLVED BY WHEN?
IP Best Practices
• Follow the DD Process
• Understand your code and what the project intends to distribute
• Scope/No Nesting
• Separate CQs (project licensed/third party)
• Third Party content requires approval to check in
• When in doubt, check in with us (emo-ip-team@eclipse.org)
Quiz
Which License is not approved for Eclipse distribution?
A. Apache 2.0
B. BSD
C. MIT
D. GPL
Which of the following licenses is not approved for Eclipse distribution?
A. Apache 2.0
B. BSD
C. MIT
D. GPL
Quiz
Can I check my third party jar into the repository without a CQ?
1. Yes
2. No
Content which requires due diligence review should not be checked in until the relevant CQ is provided a green light of checkin and/or full approval.
A. Yes
B. No
My project only uses Orbit Bundles for third party requirements. Orbit bundles are approved, do I still need to create a CQ?
A. Yes
B. No
Answer
Orbit Bundles are approved for Eclipse distribution. However, each project is still required to request reuse via its own CQ; this is required for tracking purposes and forms part of the project's IP Log.
A. Yes
B. No
Eclipse Legal Resources
- Legal - [www.eclipse.org/legal](http://www.eclipse.org/legal)
Getting In Touch
• IP Process Questions
emo-ip-team@eclipse.org
• License Questions
license@eclipse.org
• Committer Legal Agreements
emo-records@eclipse.org
THANK YOU!
Eclipse Committer Bootcamp
Part VI: Builds & Downloads
http://eclip.se/t
Agenda
- Building: Hudson/HIPP, CBI
- Signing JAR files
- Storage: build artifacts, nightlies, releases
- Using mirrors
- Download statistics
- Cleaning up
- Quiz
Common Build Infrastructure
- Hudson CI
- Git/Gerrit
- Maven/Tycho
- Jar signing
- Nexus (Maven repository)
- http://wiki.eclipse.org/CBI
Make it easy for anyone to build your code!
Hudson & HIPP
• Employs Hudson Continuous Integration
• Shared Hudson w/ Mac and Windows UI slaves
• HIPP: Hudson Instance Per Project
• Limitations
Signing
- JAR files: queued and private web service
- Queued for many files, ZIPs
- Web service for on-the-fly signing, jars only
- Windows executables via private web service
- Mac executables via private web service
- Maven signing plugin: http://eclips.se/u
- http://eclips.se/q
- wiki.eclipse.org/IT_Infrastructure_Doc
Storing builds
- Temporary stores: build artifacts, workspace
- Nightly builds not mirrored
- Stable & Release: mirrored
- Simultaneous Release
- Storage is not unlimited!
- wiki.eclipse.org/IT_Infrastructure_Doc#Builds
- wiki.eclipse.org/Hudson
- Maven: repo.eclipse.org
Download Statistics
• Use Mirrors? Get download stats.
• P2 & mirrors
What to add?
The `p2.mirrorsURL` property has the following structure:
```xml
<property name="p2.mirrorsURL"
          value="http://www.eclipse.org/downloads/download.php?file={repository_path}&format=xml"/>
```
- Replace `{repository_path}` with the path where your artifacts.jar sits on download.eclipse.org.
• P2 & stats
There are two steps to enable p2 download statistics gathering for your repository:
1) In the artifact repository that you want to track downloads from, add a `p2.statsURI` property specifying the statistics URL (in the artifact repository):
```xml
<repository name='Update Site' type='org.eclipse.equinox.p2.artifact.repository.simpleRepository' version='1'>
  <properties size='3'>
    <property name='p2.timestamp' value='1269575706171'/>
    <property name='p2.compressed' value='true'/>
    <property name='p2.statsURI' value='http://your.stats.server/stats'/>
  </properties>
</repository>
```
2) In each artifact whose downloads you want to count, add a `download.stats` property identifying that artifact (see wiki.eclipse.org/Equinox_p2_download_stats for details).
Download statistics
- wiki.eclipse.org/Equinox_p2_download_stats
- wiki.eclipse.org/Equinox/p2/p2.mirrorsURL
- Ask cross-project-issues-dev for help!
- No solution yet for Maven/repo.eclipse.org stats
Stats Tool
Partial File Name: /technology/app/downloads/release/kepler/R/eclipse
TIP: For faster results, use a file name that matches the fewest amount of files as possible to satisfy your query. For instance, query using the core file(s) that make up one user download.
Or pick from list: Open the File List
Group multiple files as a single result:
Background this query, send results to this e-mail:
Run query
**Please note:** Using the filter options below causes queries to run against 74,971,942 download records, from 2012-10-22 to 2013-10-22 00:01:01. Today's downloads will be added at midnight Eastern Time.
**Date**
<table>
<thead>
<tr>
<th>Date</th>
<th>?</th>
</tr>
</thead>
<tbody>
<tr>
<td>Custom Date From (blank = "since the beginning")</td>
<td>?</td>
</tr>
<tr>
<td>Custom Date To (blank = "now")</td>
<td>?</td>
</tr>
</tbody>
</table>
TIP: date-based queries take much longer to run when files with tens of thousands of downloads are included.
**Select View**
<table>
<thead>
<tr>
<th>Select View</th>
<th>Grouped by country</th>
</tr>
</thead>
</table>
---
**Results**
Query took 6.062 sec (0.004 connect time)
<table>
<thead>
<tr>
<th>File</th>
<th>Code</th>
<th>Country</th>
<th>Count</th>
</tr>
</thead>
<tbody>
<tr>
<td>technology/app/downloads/release/kepler/R/eclipse-standard-kepler-R-win32-x86_64.zip</td>
<td>us</td>
<td>United States</td>
<td>233557</td>
</tr>
<tr>
<td></td>
<td>cn</td>
<td>China</td>
<td>158715</td>
</tr>
<tr>
<td></td>
<td>de</td>
<td>Germany</td>
<td>61775</td>
</tr>
<tr>
<td></td>
<td>in</td>
<td>India</td>
<td>57092</td>
</tr>
<tr>
<td></td>
<td>br</td>
<td>Brazil</td>
<td>45571</td>
</tr>
<tr>
<td></td>
<td>kr</td>
<td>Korea, Republic Of</td>
<td>43784</td>
</tr>
</tbody>
</table>
Committers Only: https://dev.eclipse.org/site_login/myaccount.php
Cleaning up
- Retention policy
- Source code and Bugzilla
- Hudson build artifacts
- Download.eclipse.org
- Older builds: archive.eclipse.org
Quiz
Question 1: Eclipse download servers are permanent storage for...
a) Build Workspaces
b) All my emails and personal backups
c) Nightly builds
d) Production/Release builds
Quiz
Question 1: Eclipse download servers are permanent storage for...
a) Build Workspaces
b) All my emails and personal backups
c) Nightly builds
d) Production/Release builds
Quiz
Question 2: Can I use the Eclipse Foundation signing services to sign personal project plugins?
a) Sure, why not?
b) I think so.
c) Perhaps that is not a good idea.
d) Definitely not.
Quiz
Question 2: Can I use the Eclipse Foundation signing services to sign personal project plugins?
a) Sure, why not?
b) I think so.
c) Perhaps that is not a good idea.
d) Definitely not.
Quiz
Question 3: What are the reasons for enabling mirrors on my downloadable bits?
a) Webmaster loves me for saving resources.
b) Users love me for closer, faster downloads.
c) I get download stats.
d) I will get free beer at EclipseCon.
Quiz
Question 3: What are the reasons for enabling mirrors on my downloadable bits?
a) Webmaster loves me for saving resources.
b) Users love me for closer, faster downloads.
c) I get download stats.
d) I will get free* beer at EclipseCon.
* While there is no guarantee you will get free beer at EclipseCon for using mirrors, your chances are greatly improved.
Evaluate This Session
1. Sign-in: [www.eclipsecon.org](http://www.eclipsecon.org)
2. Select session from schedule
3. Evaluate: +1 0 -1
Sustainability Design in Requirements Engineering: State of Practice
Ruzanna Chitchyan
University of Leicester
Leicester, UK
rc256@leicester.ac.uk
Christoph Becker
University of Toronto
Toronto, ON, Canada
christoph.becker@utoronto.ca
Stefanie Betz
Karlsruhe Institute of Technology
Karlsruhe, Germany
stefanie.betz@kit.edu
Leticia Duboc
State University of Rio de Janeiro
Rio de Janeiro, Brazil
leticia@ime.uerj.br
Birgit Penzenstadler
California State University Long Beach
Long Beach, California, USA
bpzenzens@gmail.com
Norbert Seyff
FHNW and University of Zurich
Switzerland
norbert.seyff@fhnw.ch
Colin C. Venters
University of Huddersfield
Huddersfield, UK
c.venters@hud.ac.uk
ABSTRACT
Sustainability is now a major concern in society, but there is little understanding of how it is perceived by software engineering professionals and how sustainability design can become an embedded part of the software engineering process. This paper presents the results of a qualitative study exploring requirements engineering practitioners’ perceptions and attitudes towards sustainability. It identifies obstacles and mitigation strategies regarding the application of sustainability design principles in daily work life. The results of this study reveal several factors that can prevent sustainability design from becoming a first-class citizen in software engineering: software practitioners tend to have a narrow understanding of the concept of sustainability; organizations show limited awareness of its potential opportunities and benefits; and the norms in the discipline are not conducive to sustainable outcomes. These findings suggest the need for focused efforts in sustainability education, but also a need to rethink professional norms and practices.
Categories and Subject Descriptors
D.2.8 [Software Engineering]: Requirements/Specification
Keywords
Sustainability, Requirements Engineering, Perceptions, Sustainability Design, Obstacles
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
ICSE ’16 Companion, May 14-22, 2016, Austin, TX, USA
© 2016 ACM. ISBN 978-1-4503-4205-6/16/05…$15.00
DOI: http://dx.doi.org/10.1145/2889160.2889217
1. INTRODUCTION
As software systems are increasingly embedded in the social and technical fabric of our society, the role of software engineering (SE) is shifting [1]. From a narrow technical profession that builds software systems, software engineers are emerging as change agents as software technology is increasingly acknowledged as a transformative force in society [2, 3]. The importance of understanding the wider socio-technical systems in which software is embedded has been emphasized in the past decade, foremost in areas such as safety and security [4] [5]. But there is more to it than that in a highly connected world: It is suggested that every line of code has not just financial and technical implications, but also moral and ethical consequences, as software services shape and inform human behaviour [6].
In daily SE practice, decisions are made that directly affect the functional behaviour and system qualities of specific software systems. These decisions have direct and indirect effects on the socio-technical systems into which these software systems are integrated; as well as far-reaching systemic effects accumulated through their longer-term continuous usage. Such effects have been recognized by some of the Codes of Ethics for the software engineering profession, which emphasize the significant opportunity that the developers of these technologies have to do (and influence) good or harm [7].
Sustainability is generally defined as the capacity to endure [8]. This concept interrelates five dimensions [9]: environmental, economic, social, individual, and technical. The environmental dimension refers to the responsible use of natural resources. The economic dimension focuses on assets, capital and added value, which includes wealth creation, prosperity, profitability, capital investment, income, etc. The social dimension is concerned with societal communities (groups of people, organizations) and the factors that erode trust in society. The individual dimension covers individual freedom and agency. Finally, the technical dimension relates to the endurance of artificial systems.
A rising concern for sustainability has brought the effects of software systems in these dimensions into the spotlight [9, 10, 3]. With this come increasing questions about how to understand and consider them as part of software engineering. Sustainability design refers to the commitment to treat sustainability as a first-class concern in SE: a fundamental precondition for the continued existence of a system and a factor influencing many system goals. This begins with Requirements Engineering (RE) [3, 11]. However, the adoption of sustainability design practice is under-investigated in the field of SE. It is not yet clear what motivates practitioners to engage in this topic and what holds them back. But if SE as a discipline is to arrive at a new understanding of its role in society, we should start with an investigation into its own perceptions, and how these perceptions influence SE practice.
This paper characterizes the current understanding of sustainability in SE through a qualitative interview study with requirements engineers. We aim to answer two closely related questions: (1) What are the current perceptions and practices of sustainability design in RE practice? (2) What are the challenges perceived by RE practitioners for engaging in sustainability design? We take this as a starting point to identify promising leverage points - effective places of change in the software profession - that would facilitate adoption of sustainability. The focus of our analysis begins with RE, since it has the greatest influence on the sustainability of software systems [11].
In Section 2, we discuss related work to examine why useful SE practices are not adopted. Section 3 presents the design of our interview study. Section 4 presents key findings, and Section 5 examines obstacles to sustainability design and possible interventions. Section 6 discusses limitations of the study. The paper concludes, in Section 7, with a set of research priorities that highlight the interdisciplinary nature of the challenges the discipline is facing.
2. RELATED WORK
Useful practices are often not adopted even when organizations recognize the value of adopting them [12, 13]. To draw parallels with the adoption of sustainability design, we examined the SE literature to identify why existing good practices are often overlooked or ignored.
It has been recognized that there is a general mismatch between the theory on what should be practiced and the actual practice [14, 15, 16]. However, there is no consensus regarding the underlying reasons as to why such practices have not been widely adopted by the SE community.
Evidence suggests that at the level of an individual, poor adoption is often due to the lack of education and experience. For example, Regev et al. [18] state that the use of good RE practices in industry is hampered by a poor understanding of these practices and their benefits. To address this, they suggest that teaching RE at university level is essential. Similarly, Bull and Whittle [19] argue that SE is a creative process that is fundamentally about designing solutions to problems that require reflection. However, reflective practice is rarely taught explicitly in software engineering education. Moreno et al. [20] found that the knowledge required to successfully integrate good practices into SE tasks was beyond the classical technical knowledge taught in most undergraduate and graduate SE programs.
Others attribute the limited adoption of practices to the researchers themselves. Glass [14] argues that researchers simply do not have the required experience to make their theories the solution of choice. Additionally, Beecham et al. [15] found that, with regard to Global Software Engineering practices, practitioners perceive the input provided by researchers as potentially useful, but do not read research articles because of their inaccessibility. Moreover, they suggest that a leap of faith is required to apply a theory that has not been proven in practice first. Similarly, the personality of the individual software engineer can hamper the adoption of practices. Riemenschneider et al. [12] highlight that while many organizations attempt to deploy methodologies intended to improve software development processes, there is resistance by individual software developers against using such methodologies, which often obstructs their successful deployment. Tom, Aurum and Vidgen [21] suggest that the mismatch between academia and industry concerning the nature of technical debt increases the risk that intuitively attractive but sub-optimal heuristics may be adopted out of necessity by practitioners. Their study revealed precedents of technical debt to include pragmatism, prioritization, attitudes, ignorance and oversight. However, these precedents are not mutually exclusive and would be expected to manifest in various combinations and weights in different situations.
On the level of the professional environment, it is the organizational culture that is believed to strongly influence the adoption of practices. For example, Ahmed et al. [22] argue that when institutionalizing software product lines within an organization, organizational behavior plays an important role. Additionally, extra costs are named as one of the reasons why certain best practices are not implemented [23]. Lavallee and Robillard [24] also highlight how organizational factors such as structure and culture have an impact on the working conditions of developers. Their preliminary results show that many decisions are made under the pressure of certain organizational factors, which negatively affect software quality.
Finally, on the level of norms in professional practice, it is suggested that there is a need not only to understand the properties and behavior of software, but also the behavior of software engineers, development teams, and organizations [25].
In summary, the literature identifies several levels at which adoption of “proven” useful practices can be hampered. Researchers, practitioners, teams, organizations, and professional practice regulators could all be responsible to a certain degree. But which of these potential forces are relevant in the case of the adoption of sustainability design practices in RE?
3. INTERVIEW STUDY DESIGN
As part of a broader investigation into sustainability design, this paper reports the results of an exploratory qualitative interview study on the current understanding of sustainability and its related practices in the requirements engineering profession. The study design is described below.
1) In the Planning stage, the interview questions² were designed collaboratively by all authors. The study was piloted with one interviewee to validate the clarity of the questions and the interview structure. Given that no major changes were required, this interview was also included in the analysis, following the guidelines in [26].
The first stage of the interview centered on background information, finding out how requirements engineering professionals define sustainability, and on relevant activities they undertake in their daily personal and professional lives. The participants were then asked to read through a brief document outlining principles of sustainability design [27]. The second part of the interview focused on eliciting feedback on whether and how the practitioners would envisage using these principles in their work life, and on the expected difficulties in their adoption.
2) The Data Collection was undertaken both through in-person interviews and via online conferencing software. We interviewed 13 requirements practitioners from 8 countries (Austria, Brazil, Germany, Spain, Switzerland, Turkey, UK, and the USA). All interviewees work in companies, spend at least a third of their time on RE activities, and have a minimum experience of one year full-time or two years part-time in RE. Additionally, the interviewees fulfilled other roles in their companies such as project managers, product managers, and developers. The interviewees (8 male, 5 female) had a mix of educational backgrounds (3 PhD, 7 graduate and 3 undergraduate degrees). Their ages ranged between 25 and 59, with 6 interviewees in the 30-39 age bracket. The mix of businesses covered in the study included 3 small (1-49 employees), 6 medium (50-999), 2 large (over 1000 employees), and 2 Enterprise companies (over 5000 employees). The business domains varied from e-Voting to Enterprise Resource Planning, Software as a Service, security, embedded systems, hardware distributors, civil aviation, and energy.
10 of 13 interviews were held in English, 3 in Spanish. All participants were native or fluent in the language of the interview. All interviews were recorded and transcribed in their original language. Spanish transcripts were translated into English for analysis.
3) For Data Analysis, we used the qualitative content analysis method [28] to extract views and perceptions on sustainability from these interview transcripts. At least two analysts read each interview, coded the text with conceptual categories relevant to sustainability perceptions, and peer-reviewed each other’s work. An initial set of codes was created by the first coder and updated with each subsequent coding activity. The initial codebook, as well as the updates, were discussed and agreed upon by all co-authors of this paper, who are also the researchers that worked on the coding task. A web-based text analysis tool [29] was used to support the coding and review process. Within the framework of qualitative content analysis, we used a mixed approach of inductive category development and deductive category application [28, 30].
Key findings of this interview study fall into three sections, as summarized in Table 1, and are discussed in the following sections. We reference the individual interviewees by fictitious names to ensure anonymity.
Table 1: Key areas of findings on 3 levels.
<table>
<thead>
<tr>
<th>Category</th>
<th>Finding</th>
</tr>
</thead>
<tbody>
<tr>
<td>Individual findings</td>
<td>Sustainability as environmental or financial</td>
</tr>
<tr>
<td></td>
<td>Sustainability as separate from SE</td>
</tr>
<tr>
<td></td>
<td>Sustainability as a nice-to-have quality</td>
</tr>
<tr>
<td>The professional environment</td>
<td>Lack of methodological support</td>
</tr>
<tr>
<td></td>
<td>Need for mentality change</td>
</tr>
<tr>
<td></td>
<td>Assumed costs as barrier</td>
</tr>
<tr>
<td></td>
<td>Concerns of small companies</td>
</tr>
<tr>
<td></td>
<td>The role of the customer</td>
</tr>
<tr>
<td></td>
<td>Companies lack time</td>
</tr>
<tr>
<td></td>
<td>Engineers lack management support for it</td>
</tr>
<tr>
<td></td>
<td>Doubts about benefits for business</td>
</tr>
<tr>
<td></td>
<td>Perception of trade-offs and risks</td>
</tr>
<tr>
<td>Norms in SE practice</td>
<td>Project success assessed at delivery only</td>
</tr>
<tr>
<td></td>
<td>Poor communication of sustainability values</td>
</tr>
<tr>
<td></td>
<td>Regulations are drivers for sustainability</td>
</tr>
</tbody>
</table>
4. STUDY FINDINGS
4.1 Individual Findings
What is sustainability about? We observed that only 3 out of 13 interviewees (Ray, Liz, Sam) relate sustainability to its systemic and broad context. Ray noted that sustainability is about allowing humans to “thrive”. Liz, similar to Sam, stated that sustainability is “a general, wide-reaching goal of making human life non-damaging to the planet”, and that this is relevant for the present and the future as well as for individuals and societies. These 3 individuals view sustainability as comprising environmental, social, individual, and organisational concerns.
In contrast, the perceptions on what sustainability comprises are much more narrow and segmented among the rest of the interviewees, with each of them focusing on one or a few specific topics.
Typically, interviewees perceived sustainability as an issue of natural resources availability and waste reduction. For them, sustainability is about making “the use of non-renewable resources efficient” (Amy) so that the society “can still go on like thousands of years without running out of the resources” (Eve).
Business and its process continuity is seen as another major issue in sustainability. A number of interviewees (Cat, Eve, Pat, Ray) refer to the need for business to be continued in the long run.
Another topic closely related to quality is that of support for change in software, which was the key notion of sustainability for Max. To him sustainability is about “[...] supportability, reusability, maintaining and updating [...]” or in short about “Agility to update.”.
Is sustainability separate from SE? Several interviewees (Ben, Pat, Eve, Jen) explicitly saw sustainability as a separate field from that of Software Engineering. Eve stated, “I am surprised that you are addressing this sustainability issue in the context of SE.” This stems from their notion of sustainability as only “[...] limited to natural resources [...]” (Ben), and the view that things related to sustainability are “[...] perceived as being onerous and it’s not benefiting us as a business [...]” (Pat).
Is sustainability an optional quality? Three interviewees (Jen, Eve, Pat) saw sustainability as a unique selling point of the software development process, and the software system itself. However, they suggested that sustainability should be considered once other priorities have been established. For instance, Jen stated “right now, when we are developing the software we only consider the performance of the system [...] but on top of it, probably the energy consumption and sustainability requirements might be added.” She then added that “at the end, when it comes to developing the software, you are bound to a paying customer and they should be willing to participate in such an activity [...]”. Ian thought that sustainability would be in competition with other non-functional requirements (NFRs), stating that “if you want to be more sustainable, most of the time you have higher costs, and maybe most of the times other NFR may be less beneficial”. Pat, working in a startup, felt that being sustainable was out of his hands as he rented the office space, so “energy management, waste and stuff like that, is influenced by [...] the policies they [space owners] have in place”. He further noted that as the company grows “hopefully, in the future, we can start doing things in a more sustainable way, from the green perspective”. Thus, sustainability is mostly seen as only an environmental issue which has little to do with the work of software development in the first place. Yet, some think it would be “nice” to consider sustainability as an NFR, if everything else has been addressed.
²The questions and the codebook for this study can be found at http://sustainabilitydesign.org/2015-interview-study.
These misperceptions of sustainability focus purely on the environment and disassociate sustainability from SE. This corresponds to the responses provided on how interviewees address sustainability in their daily private and work lives.
**Actions on Sustainability.** So what do RE practitioners do to support sustainability in their daily private and work lives? The vast majority of the responses on private life actions were about recycling and/or reuse, saving energy by switching off when not in use, reducing water usage, and using public transport or cycling for travel. Max noted the long-term reuse of personal knowledge, and Sam considered issues of community and individual life quality. Ben and Eve reported doing nothing related to sustainability at all.
Similar actions related to sustainability were reported within the work sphere, including reduction of paper use (Jen, Kim, Cat, Liz), reduction of energy use by switching off unused devices and moving to more energy-efficient hardware (Jen, Ian, Dan, Liz), use of public transport for work travel (Pat), and reduction of waste from printers (Eve). Two individuals discussed social aspects of sustainability at work, with respect to sustainable work schedule management (Sam) and employee disaster care (Ray). Only Max related sustainability to engineering practice in terms of reusing knowledge for change support and evolution in software.
Two more individuals noted that their organizations pursue sustainability-related certification, either directly (Kim) or through work with clients (Ian). However, they knew very little about the actual implications of this certification, as this had no or little effect on their daily work as requirements engineers.
**Personal Responsibilities.** Some interviewees denied responsibility for sustainability or acknowledged only a very small share in it. For example, Ian stated: “I am trying at least not to be wasteful. I try to avoid too much plastic [...] when it is easily possible.” Cat pointed out that the customer was responsible for the final decisions: “maybe, I can do something that I think is super sustainable and that will go well and such, but if he [the customer] doesn’t have the vision that this is important, [...] it would never be done”.
Several interviewees believe that the decision regarding sustainability should be made by higher management such as executives and project managers (Cat, Ben, Ian). Ian, for example, stated that “[...] it is really a political discussion that should happen on the executive level and it is difficult for a requirements engineer to have an impact there”. Others think it is the customer who needs to request a sustainable system (Eve, Pat). Jen believes the requirements engineer and the software architects (designers) do have the power to actually design sustainability into systems. However, she argues that currently they often only focus on technical aspects. Eve thought that because of limited design possibilities in hand, none of the roles has the capability to make changes for sustainability at the company: “[...] I think we are very limited in our possibilities to change anything”. This also indicated a low sense of personal responsibility in professional life.
4.2 The Professional Environment
Our interviewees reported that there are a number of factors in their professional environment that hold them back from engaging in sustainability design.
**Lack of Methodological Support.** Several interviewees (Jen, Amy, Liz, Eve) suggested that they cannot practice sustainability design as it is not supported by the methodologies used in their companies. For instance, Jen says that her company uses a waterfall methodology, but she cannot apply sustainability to her work as “the waterfall lifecycle does not contain any concepts of sustainability.” This is echoed by Amy who stated that “we work with quite clear methodologies in each phase of the project. [...] if it [sustainability] isn’t justified by the methodology, it is difficult to incorporate”.
They further note that there is a general lack of such methodologies in SE. For example, Eve suggested that “there must be much, much more information and techniques and methods available in order to help the developers, REs, project managers and usability engineers”.
**Need for Change of Mentality.** One of the major difficulties in adopting sustainability design is convincing colleagues to change their way of thinking. Pat says that “convincing them and getting them to change their way of thinking” will be the key challenge in adopting sustainability in his company. This difficulty is related to the inherent unwillingness to change. Sam and Ray anticipate reactions such as “if we have ever done it in this way, why would we change?” and “how am I going to be reviewed on this?” respectively. But it also comes from the already excessively fast-paced markets. Sam notes that “we are moving forward already at a really fast and maybe even unsustainable speed [...]”. And so, asking people to [...] think about doing things differently while they still have day-to-day goals can be pretty challenging.
Another compounding factor here is (as per Ray, Liz, Cat, Eve) the number of parties involved that need to agree to a change. Liz notes that it is not only about RE professionals, but also about the whole “industries and policy people who have a hard time thinking in those [sustainability] terms”.
Moreover, Ray notes the need to “have people to truly agree on a shared vision of sustainability and work towards it”. This point is confirmed by Kim and Ben, with Ben stating that “the commitment of all the team is necessary for practicing sustainability design in an organization”. Eve highlights that “such a commitment would require awareness by all roles”, which according to her, “is a major obstacle”.
**Economic Constraints and Short-term vs Long-term Trade-off.** An unsurprising factor that made several interviewees reluctant to practice sustainability design concerns the assumed costs of doing so (Ray, Eve, Ian, Amy, Ben and Kim). There is a general underlying assumption that practicing sustainability requires extra work, which inevitably means extra costs. Kim, for example, believes that money will need to be spent in “making people understand and getting the stakeholder involved”, while Amy expects “the extra costs to be incurred in the system analysis or implementation”. Interestingly, Amy believes that “the cost would not be a problem if it was justified by the methodology”. Similarly, Eve is concerned with the extra costs and risks when “you add some functionality only for sustainability purposes”, which suggests that sustainability itself is not a good enough reason for the extra work.
Even when interviewees see the potential gain from sustainability engineering, they may still feel unable to commit to it due to additional initial investment needs. Thus, Ben notes that “what one is looking for is to make the most money in the shortest time possible [...]. If we want to implement or adopt sustainability in our company [...] we have to make an initial effort, or we have to invest time, resources and money to later collect the rewards”. He then suggests that “this requires agreement from many actors within the company, which is not an easy thing to achieve”.
**Small Company Concerns: Client’s Satisfaction and Costs.** Pat, Max and Ian work in small companies with under 50 employees. They highlighted that the key priorities in their work life are focused on good relationships with their clients. This means that the companies are very responsive to customer requests, in terms of delivery time, acceptance of customer viewpoints, and costs. For instance, Pat notes that his company is “[...] based on being reactive, it’s about building a relationship with these clients and customers and it’s a way is [...] you know [...] impressing them”. Ian and Max agree that sustainability is a worthwhile cause, but would rather leave it up to the customer to prioritize it. As stated by Max, “you’ve got to shy off pushing this too much by becoming an evangelist if you’re pushing against an emotively held big belief of the customer because you would just never make a sale.” Moreover, all three interviewees were concerned about the potential loss of clients due to costs. Ian states, “it would cost more and it might be cheaper, for our customers, to switch to another partner, who is not in this topic and don’t care about sustainability, but just doing their job in the cheapest way”.
**Limited resource availability** is another issue raised by the small companies. Pat, for example, stated that sustainability design “would require us to do extra things which we do not have resources for”. This point is closely related to the cost argument, but considered from the manpower and skill availability perspective: small companies do not have access to either on short notice.
**Stakeholder for Sustainability Requirements.** Possibly as a consequence of the importance of customer satisfaction to companies, some interviewees (Cat, Eve, Ian and Jen) clearly indicated that sustainability design must be either driven or approved by the customer. Ian, for example, states that his company “likes to work in a sustainable way”, but asks “whether their customers also put a high priority on sustainability”.
This belief comes partially from the underlying assumption that the customer will have to pay extra for sustainability design (Eve, Ian, Jen). Eve, for example, stated that “addressing this issue requires extra work and this extra work has to be paid by someone – the customer”. Several of the interviewees (Cat, Eve and Jen) think that if the customer is not interested in sustainability then the company is left with no choice but to avoid it. This is clearly stated by Cat who said that “the customer is asking me this, I know it will not be sustainable, but I have to deliver this now because it’s what he wants”. Ian, on the other hand, believes that his company has “the power to make the customer aware that sustainability is important for him and the corporate business image”, and therefore worth pursuing.
**Lack of Time in Companies.** Some interviewees (Ben, Cat, Pat, Amy) commented on lack of time as a key factor preventing them from practicing sustainability design. This issue is clearly voiced by Cat, “as there is no time, you do what you can. And perhaps this [sustainability design] is pushed down” and “it gets forgotten there in a corner”. This same interviewee states that when the customer asks for something unsustainable, the company cannot waste time in reasoning about it, but will simply implement it and “it is the customer’s problem”. “Deep down everyone wants to do well, but there is no time”, says Cat. Amy agrees, “it is not intentional, it is because of specific needs of projects that, unfortunately, [engineers] do not usually have this [time]”.
**Lack of Management Support.** Organizations are typically structured in hierarchies, which can make individuals in lower levels feel unable to make bigger changes without management approval. This view was very clear in several of our interviews (Amy, Ben, Cat, Dan, Eve, Jen). Cat noted that if her manager does not share her ideology, her sustainability ideas might never be prioritized and implemented. Amy agrees that sustainability needs to “be supported from above [the directive layers] so that this is understood as part of the company”. However, convincing higher management of the need for sustainability is a tough challenge and cannot be done without proof of extra financial resources (Ben, Dan, Jen). Ben, for example, says he “would need a deeper study of both the situation and of the benefits [...] to talk to my managers”, while Jen states humorously “if it brings more customers or it brings more money, it would be easy. Like always”.
**Doubts about the benefits for business.** Three interviewees were skeptical about the benefits that sustainability could bring to businesses. Pat compares sustainability design with form filling and says “it’s not benefiting us as a business”. Jen fully agrees. Similarly, Kim believes that even though “software can do a lot to bring more sustainability, [...] some software just don’t have anything to do with sustainability”.
**Requirements Trade-offs and Risks.** Finally, some interviewees had implementation concerns with respect to sustainability. Ian believed that sustainability competes with other requirements. He exemplifies that redundancy is needed
for safety, but it also requires more resources and power.
Eve and Kim thought that sustainability may impose risks. Eve notes that “when you add some functionality only for sustainability purpose, of course, there is [...] extra risk for an error somewhere in the system”. Kim, on the other hand, took the viewpoint of the customer, reasoning that a system change driven by sustainability could not be implemented if it had a negative effect on the customer.
Typical beliefs at the organizational level are summarized in Table 2.
Table 2: Needs in Professional Environment
<table>
<thead>
<tr>
<th>Sustainability needs...</th>
<th>So organizations need...</th>
</tr>
</thead>
<tbody>
<tr>
<td>to be part of SE methodology</td>
<td>to adopt new methodologies</td>
</tr>
<tr>
<td>a change of mentality</td>
<td>to invest into vision building and training</td>
</tr>
<tr>
<td>investment</td>
<td>to commit resources</td>
</tr>
<tr>
<td>to be considered for all software</td>
<td>stakeholders to ask for it</td>
</tr>
<tr>
<td>be considered beneficial</td>
<td>demonstrated business benefits</td>
</tr>
<tr>
<td>time commitment</td>
<td>time saving alternatives</td>
</tr>
<tr>
<td>management support</td>
<td>proof of utility to management</td>
</tr>
</tbody>
</table>
4.3 Norms in Professional Practice
We observe a clear influence that the current professional practice guidelines and norms³ have on the practice of sustainability amongst the RE practitioners. The influence of these norms and guidelines transpires through a number of avenues, some of which are discussed below:
Fixed Point in Assessment of Project Success. Many of the presently practiced software engineering methodologies advocate for a clear project completion point. If the project is delivered on time, within budget, and is accepted by the client, it is deemed a complete success. As stated by Ben, “once this solution has been delivered and executed, we stop having influence on how the client will use it or as the client wants to take it”. In other words, at this point the interviewee feels convinced that his job is well done and completed; the responsibility of the software developing organization is considered to be discharged.
This point is also observed by Ian. He comments in the second stage of the interview: “I think it is upfront sometimes difficult to forecast how sustainable something really is and over time once [...] everything is deployed, there will be more concrete data available, which can then, in turn, be very useful for fine tuning and optimizing and maybe even correcting some of the requirements. And that of course could, along with awareness, also have a positive impact on sustainability.”
Poor Communication of Sustainability and Certification Values. The comments of our interviewees suggest that in many companies, there is little awareness of the systemic nature of sustainability values, little communication across professional boundaries, and little assistance provided to software engineers to support their understanding of sustainability issues. Although several companies promote reduction of waste, recycling, paperless operations, use of public transport for travel and alike, these sustainability-supporting behaviors remain external and disjointed from the daily core work of software engineering. Even though Kim is employed in a company which is sustainability certified and Ian is employed in a company that is working towards such certification, neither of them quite know what such certification is about (except for switching off and no paper printing policies). The certification has no effect on their own professional practice.
External Standards and Regulations. Investor requirements and enforced regulations and legislation drive organizations to engage with sustainability. For instance, Pat notes that, despite his company’s priorities on economic growth, they have to account for their environmental (CO2 emissions) and social impact (job creation) due to investors driven by the EU regulations.
Table 3 exemplifies some of the interviewees’ beliefs about organizational norms.
Table 3: Professional Norms
<table>
<thead>
<tr>
<th>Norms need to ...</th>
<th>because sustainability needs to...</th>
</tr>
</thead>
<tbody>
<tr>
<td>promote long-term re-assessment and re-evaluation practice</td>
<td>be evaluated over time</td>
</tr>
<tr>
<td>define tasks and obligations in each SE role</td>
<td>have an advocate</td>
</tr>
<tr>
<td>promote responsibility</td>
<td>be regulated</td>
</tr>
</tbody>
</table>
5. OBSTACLES AND INTERVENTIONS
When asked if they would personally support sustainability design in their institutions, all thirteen interviewees were fully supportive. Yet, each noted a number of areas which, in their perspective, would make sustainability design adoption difficult. It is interesting to note that some of the issues raised by our interviewees have indeed been identified and observed in previous research on new practice adoption (see Section 2). This study did not attempt to introduce real change into software engineering practice, but instead invited practicing requirements engineers to consider obstacles to the adoption of sustainability design. The findings and analysis results from this study are summarized in Table 4 and discussed in the following sub-sections.
5.1 Individual Resistance, Lack of Education
Resistance to uninvited change is an innate human characteristic, and it has previously been noted to cause difficulties in the adoption of new practices in software engineering [31]. Our interviewees explicitly and implicitly noted a number of areas where such resistance to change could be expected.
The issue of individual resistance to change was explicitly noted by Ray and Sam, who say that individuals: (i) do not like to change their habitual practice if they do not see an urgent need to do so, and (ii) are already too stressed and will be concerned about implications of change on their
---
³We interpret “norm” as general agreement within the SE profession on what a software professional should be obliged, permitted, or expected to do.
own performance and work-load. Since all thirteen interviewees also agreed that sustainability will require extra (unwelcome) work, to some degree, they also all implicitly resisted the idea of change. Some also explicitly passed the responsibility over to others (e.g., managers, companies, policy makers), rather than expressing willingness to take it upon themselves. Furthermore, several interviewees noted that to ensure success of this endeavour, a substantial commitment into consensus building and world-view change is required across team members, various teams, stakeholders, and management. While each interviewee was personally supportive, they implied that such an endeavor, clearly, was not a job for a single requirements engineer or even their small team.
We observe a clear relationship between the knowledge sources on sustainability used along with the work experience of those interviewed for this study, and the depth and breadth of their perception on sustainability. Those interviewees whose knowledge sources are limited to news (Amy, Ben, Cat) or news and some discussion (Dan) have a rather narrow perception of sustainability, limiting it mostly to the topic of environmental impact and resource use. Indeed, the environmental topics of sustainability are the ones most often discussed in news, while social, ethical, and individual topics are most often neglected. A limited set of sources of knowledge on a subject also reflects the low inter-
Table 4: Obstacles and Intervention Strategies
<table>
<thead>
<tr>
<th>Level</th>
<th>Obstacle</th>
<th>Mitigation Strategy</th>
</tr>
</thead>
<tbody>
<tr>
<td>Individual</td>
<td>Lack of Knowledge</td>
<td>Education</td>
</tr>
<tr>
<td></td>
<td>Lack of Experience</td>
<td>Training</td>
</tr>
<tr>
<td></td>
<td>Lack of Methodology and Tool Support</td>
<td>Demonstrators of current methodology and tool applicability; New tool and methodology development</td>
</tr>
<tr>
<td></td>
<td>Resistance to Change</td>
<td>Education on need for Change; Motivation for change adoption</td>
</tr>
<tr>
<td></td>
<td>Fear of Unknown due to Change</td>
<td>Clear evaluation and assessment timelines, criteria, and support provision</td>
</tr>
<tr>
<td>Professional Environment</td>
<td>Lack of Higher Management Support</td>
<td>Education, Demonstrators of benefits from Software Engineering leadership</td>
</tr>
<tr>
<td></td>
<td>Reliance on Customer for Sustainability Requests</td>
<td>Demonstrators of win-win solutions</td>
</tr>
<tr>
<td></td>
<td>Tradeoffs: Sustainability vs. NFRs</td>
<td>Stepwise transition support for risk reduction; A roadmap with strategies, methodologies, sample case studies</td>
</tr>
<tr>
<td></td>
<td>Risk due to change</td>
<td>Sustainable design into current practice within the available resources; Stepwise transition plans</td>
</tr>
<tr>
<td></td>
<td>Fear of client and income loss</td>
<td>Sustainable design into current practice within the available resources; Stepwise transition plans</td>
</tr>
<tr>
<td></td>
<td>Unavailable Time and Resource</td>
<td>Sustainable design into current practice within the available resources; Stepwise transition plans</td>
</tr>
<tr>
<td></td>
<td>Short-termism and income focus</td>
<td>Sustainable design into current practice within the available resources; Stepwise transition plans</td>
</tr>
<tr>
<td></td>
<td>Poor Communication of Sustainability Values</td>
<td>Sustainable design into current practice within the available resources; Stepwise transition plans</td>
</tr>
<tr>
<td>Professional Practice</td>
<td>Lack of responsibility for long-term consequences of software</td>
<td>Review of and integration of sustainability principles within the professional standards, guidance, and accreditation criteria</td>
</tr>
<tr>
<td></td>
<td>Sustainability as fundamental ground for software acceptance</td>
<td>Integration of sustainability requirements into SE guidance and practice standards</td>
</tr>
</tbody>
</table>
have a strong environmental focus such as reducing waste and recycling.
Many of those interviewed say that they are unable to practice Sustainability Design within Software Engineering due to the lack of methodology and tool support (see Section 4.2). While it has been demonstrated [32] that in many cases sustainability can be supported through the use of present RE techniques and tools, the interviewed RE practitioners did not show any awareness of this. Thus, it is not only the absence of tools and techniques that hampers the practice, but also the lack of knowledge about what sustainability is and how to support it within current RE practice methodologies and tools.
Indeed, before the second stage of this interview study, we requested that the interviewees read a short document on sustainability [27] and then reflect on how they could integrate the notions of sustainability from the document into their practice. The reading of this two-page document was sufficient for most of the interviewees to form a broader, more inclusive view of sustainability as a subject, and to conceive practical steps for integrating sustainability design into their professional practice. In short, our findings confirm the proposition (see Section 2) that lack of education and experience regarding a discipline can have a negative effect on actual practice. Therefore, it is necessary to educate RE practitioners on the subject of sustainability design through formal education (e.g., university degrees), practice guidelines, demonstrative examples/case studies, and the like.
5.2 Professional Environment: Organizational Culture
The findings from our study corroborate previous work that identified organizational culture as a key factor in the adoption of good practice. The most obvious example for this is the uniform response from the small business representatives (Pat, Max, Ian, as discussed in 4.2). All three thought that sustainability-related practices are unsuitable for small businesses, as they must be reactive and immediately responsive to the customer needs and have no time or resources to spare. This is mirrored by a recent survey that concluded, "Given the narrow view that people have of sustainability, it is not surprising to hear such opinions. Gladly, corporate mentality towards sustainability is changing and CEOs are increasingly recognizing the importance of sustainability to the success of their business [33]."
This clearly is a matter of culture within software start-ups. Data from new start-up businesses suggests that those with a social enterprise and community benefit focus are more resilient, and more likely to survive, than those without [34]. For instance, Pat, who represents a start-up that works on market analysis, would be able to demonstrate opportunities for an increased customer base through appeal to increasingly environmentally aware customers, or for the competitiveness of the client’s business through engagement with sustainability. Yet, to Pat, this has not been requested by the client and so is not worth pursuing. Interestingly, Pat also admits that other good-practice guidelines with proven long-term benefit to software companies, such as adequate documentation and change management, are lacking in this start-up due to the same focus on reactivity and short-term survival.
Software organizations have a strong focus on satisfying customer requirements (see Section 4.2). All that is engineered within the software must be requested by and paid for by the customer. Indeed, the customer has to agree on what software they are paying for. However, it is also well recognized that often the customer is not clear on their real requirements [35]; it is the responsibility of the requirements engineer to help identify, clarify, and agree upon the actual requirements with the customer. Should the methodology adopted by an organization insist on the requirements analyst identifying and discussing sustainability requirements with the customer, it is very likely that (at least some) such requirements will make it into the list of what the customer asks for. This, in turn, would require an organization to either have a clear priority for its own sustainability values, or be forced to prioritize these through external standards and regulations.
The majority of the interviewees bluntly stated that implementing sustainability design requires extra costs, which the companies are not able or willing to pay. This corroborates the findings from the related work on good practice adoption (Section 2) that extra costs hamper implementation of good practices in industry.
Our interviewees name several reasons why companies are unwilling to undertake these extra costs. First, they fear that the client will not pay extra for sustainability (if they have not asked for it) and will instead choose another, cheaper vendor. Thus, not only will extra costs be incurred, but the client will also be lost. Second, they note that companies do not have the time and resources to commit to sustainability design, since sustainability is often considered an optional extra property rather than a basic foundation of software operation. Furthermore, even if one could see the potential of some future gains from investing in sustainability, such future focus is not commonly valued: companies want to make the most money in return for their resource investments in the shortest time. Finally, adoption of sustainability design is likely to require substantial organizational change, with costs associated with staff training and education, and with building a shared vision and practice amongst all members of development teams as well as management and policy makers.
While the risks and the costs of this new practice are commonly perceived to be very real and present, the potential gains from it seem still unproven and removed in time.
5.3 Norms and Practices, Regulations and Responsibilities
The current standards and regulations for software professional practice do little to promote sustainability practice within software organizations, focusing only on avoiding intentional and immediate harm [36] through software design. Software effects often do not manifest until after a period of continuous use (e.g., the effects of Facebook or Twitter). Thus, it has to be recognized that software development organizations are responsible for the longer-term effects of the software delivered to their user communities.
Yet, today the focus is clearly and solely on the immediate impact. If the project is delivered on time, within budget, and with the quality accepted by the customer, the work of the developing company is often considered to be completed and the responsibility delegated to the customer. But, if the success or failure of a software project is measured at a fixed point in time (i.e., the handover date in current practice), the indirect and systemic effects of the software systems will be externalized by the developer companies to the responsibility of the customer. The responsibility for some of these indirect effects can be passed back to the developer companies if a longer-term adaptive maintenance contract is linked to the initial system delivery cost, or if the software use is provided as a service by the software company.
It is also not surprising that a substantial shift from owning to leasing software services is already under way as more customers move to Software/Platform as a Service business models. However, so far it has mainly been driven by economic and usability factors. The explicit focus on environmental and social concerns that materialize in indirect and systemic effects of software systems is still largely overlooked [11]. Yet, the SE profession must assume responsibility for the longer-term results of its developments.
In contrast to software industry practice, the UK Standard for Professional Engineering Competence [37], for instance, defines specific sustainability-focused competencies and commitments for each role of a professional engineer. Here a professional needs to “undertake engineering activities in a way that contributes to sustainable development”, including the “ability to […] progress environmental, social and economic outcomes simultaneously” [37]. Such explicit commitment to sustainability, as well as assumption of longer-term responsibility for one’s work, is presently missing within software organizations and their regulating and guiding bodies.
6. LIMITATIONS
In this section, we discuss four threats to validity: construct validity, internal validity, external validity, and reliability.
Reactive bias to the presence of a researcher can pose a threat to construct validity, which can be exacerbated by different researchers conducting the interviews. To reduce that threat, interviewees were assured of their anonymity, and we used open questions in the interviews as a way to reduce interviewer bias [38]. Similarly, an interview guideline had been agreed upon by all interviewers and followed after the first pilot interview. A related threat to construct validity is that interviewees may not understand the questions, and the interviewer may misinterpret data. To mitigate this threat, we ensured that the interviewees had sufficient prior experience in RE; furthermore, to provide a context for the questions, we asked the interviewees to read a brief document on sustainability design before the second stage of the interview started\(^1\). In addition, we piloted the interview to make sure that the questions were clearly stated. Furthermore, the interviews were taped, allowing the researchers to listen to the interviews again to limit misinterpretation. Transcripts were passed to the interviewees for comments and corrections, and no corrections or changes were suggested by the interviewees. Coding of the first interview was conducted with all of the core coding group participating. Subsequent coding was then conducted pairwise, with at least one member of the core coding group taking part.
Confounding factors influencing the analysis are a major threat to internal validity. To mitigate this threat, we applied qualitative analysis techniques. Additionally, we do not claim to have collected any data other than practitioners’ perceptions and attitudes related to sustainability, and how these may shift when sustainability design is considered. Also, to allow for future comparison across studies, all selected practitioners had a defined level of experience in RE. Nevertheless, the threat of confounding factors cannot be ruled out completely.
Considering external validity, the cases presented here are not statistically representative and should not be taken as such: this is a qualitative study, and statistical generalization is not our goal. Instead, we are concerned with analytical generalization [38]. Our explorative, qualitative study was designed to help us identify currently perceived obstacles and possible mitigation strategies to enable sustainability design. By selecting people with sufficient experience in requirements engineering, from different application domains, countries, and company sizes, we focused on collecting a rich set of data.
To mitigate threats to reliability due to interpretation in qualitative analysis, coding was done first in a team and then pairwise. Any mapping disagreements were discussed until consensus was reached.
7. CONCLUSIONS
This paper reported on a study of the current state of sustainability in RE practice. We investigated current perceptions and attitudes on sustainability in RE practice and assessed whether they reflect the full scope of Sustainability Design. We identified barriers to the engagement with Sustainability Design in RE practice and identified possible interventions. Finally, we compared this to non-adoption of good practices discussed in our literature review.
On an individual level, we found a lack of knowledge and understanding; on the professional environment level, a lack of support; and in the norms of professional practice, a lack of responsibility. These key aspects are shown in more detail in Table 4. These obstacles direct us to the intervention points of education (on every level), integration of sustainability principles (on every level), and the need for success stories to demonstrate win-win solutions. For the latter, we need longitudinal case studies with a common design, to be replicated across different application domains. A simple design would be “apply sustainability principles in real projects and see how this is reflected in existing success measures”. The lack of a control group of course makes it challenging to draw firm conclusions. A shared knowledge base would contribute to increasing the visibility of the opportunities.
If we take seriously that “every line of code has a moral and ethical implication” [6], we accept that designers of systems are at least partially responsible for their effects on societies and on the environment. Education presents a key avenue for improvement. We need to include sustainability principles in software engineering courses, educate software customers about sustainability within requirements elicitation, and educate software users about the choices they are making.
In addition, we have identified a number of research priorities that highlight the interdisciplinary nature of the challenges the SE discipline is facing. These research priorities are an integration of sustainability design principles with requirements engineering and software design, a common case study design replication across different domains, and a rework of the ethics standard for software engineering to include the responsibility for sustainability including towards society and the environment.
Significant barriers remain to overcome before Software Engineering can claim to routinely advance not just technical and economic, but also social, individual and environmental needs simultaneously. Critical reflection is needed at the individual, organizational and community level to advance the profession’s ability and commitment to do so.
Acknowledgements
This work is supported by FAPERJ (210.551/2015), CNPQ (14/2014), the European Social Fund, Ministry of Science, Research and the Arts Baden-Württemberg, and WWTF through project BenchmarkDP (ICT2012-46). Thanks to Steve M. Easterbrook for comments and revisions. A special thank you to our friend and colleague Sedef Akinli Kocak, Ph.D. student at Ryerson University, Canada, for her contributions to this paper.
---
\(^1\)As the first stage was focused on own perception elicitation, the reading request was post-first stage.

8. REFERENCES
Fit for purpose: Hybrid Cloud Platform Technologies
By IBM Global Industries, Financial Services
March 2023
Executive Summary
For our Financial Services clients, there are other imperatives that we need to incorporate:
- **Financial Metrics:** All firms focus on expense compression and driving profitable revenue growth.
- **Regulation and Compliance:** Every financial services firm needs to understand, implement, test, audit, and comply with regulations. Technology helps automate regulatory challenges.
- **Cyber-Security:** While hybrid technologies incorporate a consistent security model, everything about the platforms in financial services needs to be built around a Zero Trust Security-First architecture.
- **Digital Transformation:** Financial Services firms are reinventing themselves to offer digital services across every line of business. Without new thinking, the transformation journeys are unpredictable at best.
- **Automation:** Firms are full of manual business and technology processes run and monitored by human operations teams. Manual processes create fragmented customer and employee experiences.
- **Risk:** The risk landscape has become ever more complex, both in a regulatory context but also shifting the view of risk from being a necessity to a business differentiator.
- **Disruption:** Just as emergent ecosystems have disrupted entire industries, the same will happen in financial services. Clients have the choice to create or participate in ecosystems that extend the value of an institution’s services directly into commonly used business platforms, such as ERP and Payments systems.
- **Embracing Data to create AI-infused insight:** Accessing dark pools of siloed data, reinventing data governance, and leveraging self-service data science to build trusted and transparent insights at scale is the goal of every firm.
---
This paper aims to help every level of executive and practitioner understand the definition, scope, and value of hybrid technologies. Digital Transformation and Hybrid Cloud are now ingrained parts of our 2020s vocabulary. In incumbent businesses seeking to modernize in response to disruptive forces like the pandemic or new competitors, digital transformation and hybrid cloud are part of every technology approach. We throw these words around in everyday conversation as if their meaning is self-explanatory. IBM’s competitors have sought to define simple terminology to meet their business objectives, for example, the ability to run workloads in multiple physical locations. One example is the false equivalence of the terms Multicloud and Hybrid, used interchangeably. Almost every firm consumes or deploys cloud services from more than one provider. That approach is undoubtedly multicloud but is only partially hybrid. To be fully hybrid, a firm must embrace new mindsets, principles, architecture, ways of working, and tools.
At IBM, our broader definition of Hybrid includes the following:
- **Open** – The hybrid cloud runs on open frameworks that are both generic and industry-specific
- **Hybrid** – Most businesses are already in multiple environments on-prem, in clouds, and at the edge. Software layers secure and operate these environments automatically and consistently.
- **Multi-Cloud** – Embracing multiple public clouds typically means embracing cost and complexity. Public clouds inspire us to think of cloud as a mindset and a new way of working rather than a specific location. The new techniques include Agile development, DevOps, and Site Reliability Engineering (SRE).
- **AI** – Clients need to train and deploy AI at scale with trust and transparency. AI is also necessary to observe and run environments.
We assert that a hybrid approach accelerates value creation by adopting new processes and technology frameworks, each of which is considered. As clients review their incumbent estate and plan for their adoption journey, they must consider the specific economics (ROI and Technical Debt) and known pathways to value. Doing so will help avoid a technology strategy built upon shiny objects. Instead, a program will be built on defined outcomes and predictable experiences. Every business needs to make fit-for-purpose decisions about where technology runs. Again, considering fit-for-purpose in functional, non-functional, and regulatory requirements shapes the outcome. Irrespective of a specific platform, clients need foundational platform-agnostic hybrid cloud accelerators like Red Hat OpenShift. No company adopts a hybrid approach by abandoning incumbent investments such as IBM Z, a full participant in the hybrid journey. IBM Z enables multiple modernization approaches with integration to the IBM Cloud Paks and Zero Trust security to accelerate delivering new digital client experiences with all necessary Zero Trust security and controls.
---
**Environmental and Social Governance:** a central theme for the coming decade. Institutions need to show appropriate stewardship of the earth’s resources, and ensure diversity and inclusion in their staff, vendors, and product mix.
Written by
John J Duigenan
The Primary Value in Cloud Comes from a “Mindset” and a “New Way of Working”
Given that we have concerns about the typical usage of the word “Cloud,” let’s be very specific about its benefit. Early beliefs suggested that the adoption of elastic cloud infrastructure would reduce costs. However, customers of cloud providers saw their planned expenses go up by as much as 300% due to unforeseen billing and their inability to abandon their existing incumbent technology investments. At face value, the cost increase is prohibitive. However, rather than anchor in the infrastructure cost, the calculation must be redirected to the value created from a new mindset, a new way of working, and new tools. The assertion is that new ways of working can radically accelerate and simplify software development, delivery, and operations. IBM’s experience with clients demonstrates that value is accelerated 2.5 times compared to a public cloud-only approach when using a hybrid approach.
Hybrid Technologies are Not Monolithic
Monolithic technologies are rigid and inflexible; hybrid technologies are de-coupled. The foundational construct within hybrid technology is the combination of API and Microservice:
— An API is an application programming interface, in other words, the contract definition for how a distinct function is called. Invoking an API call is the primary way developers call a code function. An example of an API call might be to get an account’s balance or initiate a transfer between accounts. The API definition specifies both the processing location of a service and the definition of invocation and response parameter values. A prior way of thinking about API calls was an RPC or remote procedure call.
— A Microservice is the implementation of code in support of an API definition. The microservice can be implemented in any programming language. It can be deployed to any kind of server or platform environment anywhere, providing it supports API calls. Microservices are intended to encapsulate very distinct, discrete, and de-coupled business functions. Whereas monolithic code combines user interface, business processing, and data functions, each of these must be distinct in well-formed microservices. De-coupling is done to promote and enable the re-use of business and data functions across a code base for multiple delivery channels. Microservices are intended to be stateless; in other words, a specific instance of a microservice does not retain stateful session data; this enables multiple instances of a microservice to run at once, enabling scalability. A microservice may call other microservices via API calls.
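To make the API/microservice split concrete, here is a minimal sketch in Python. All names, account identifiers, and amounts are hypothetical illustrations, not part of any BIAN or bank API: the frozen dataclasses play the role of the API contract (the invocation and response parameter values an API definition would pin down), and the stateless function plays the role of the microservice behind it.

```python
from dataclasses import dataclass

# Hypothetical API contract: the request and response parameter values
# that an API definition would specify (names are illustrative only).
@dataclass(frozen=True)
class BalanceRequest:
    account_id: str

@dataclass(frozen=True)
class BalanceResponse:
    account_id: str
    balance_cents: int
    currency: str

def get_balance(store: dict, req: BalanceRequest) -> BalanceResponse:
    """Stateless microservice behind the contract: it keeps no session
    data of its own, so any number of instances can serve requests;
    `store` stands in for the persistence layer it would call."""
    return BalanceResponse(req.account_id, store[req.account_id], "EUR")

# The backing store is passed in explicitly; statelessness refers to
# session data, not to the business data the service reads.
store = {"NL01-0001": 250_00}
resp = get_balance(store, BalanceRequest("NL01-0001"))
print(resp.balance_cents)  # 25000
```

Because the function holds no per-client session state, horizontal scaling reduces to running more copies of it behind the same contract.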
Hybrid Technologies are Built on Open Standards
Technology organizations have evolved to embrace code, APIs, and frameworks based upon open standards. There is a strong belief that open standards serve multiple interested parties and foster innovation through open collaboration. The following open technologies are common elements of hybrid technologies:
**API Standards**: Clients can adopt standards-based APIs to avoid the burden of architecting and implementing a custom API framework. Using a standards-based API aids interoperability when interfacing with multiple related components. Examples of open API standards include those offered by the Banking Industry Architecture Network, or BIAN (https://www.bian.org), an organization comprising multiple financial institutions and their technology providers. BIAN aims to provide API and data-model coverage across a universe of banking-specific functions. Another example of open standards in financial services is the “Payment Services Directive” (PSD2), the European law that opened banks’ balance and payment functions via API calls. All European banks were required to implement these APIs.
**Containers**: A container packages the executable functions for a microservice. In the past, a typical packaging vehicle was the virtual machine image. Virtual machine images contain ALL required executables and configuration assets, including the operating system, client frameworks for security and monitoring, middleware, database, and application runtimes. While virtual machines provide efficiency, running them consumes excessive system resources given the duplication of assets needed for each environment. In contrast, a lightweight container image contains only the code assets and libraries required to run a microservice. A container runs within a host operating system and therefore does not need to package operating system services. Each running container is isolated from others by design, partially to ensure that one container cannot attack another. Once built, the configuration of a container can be immutable; in other words, the configuration cannot be changed. This is a valuable characteristic, as immutability prevents attacks based upon the ability to manipulate a configuration. Containers are packaged with a deployment descriptor file that describes the configuration of the microservice within the container. Container images standardize on a format defined by the Open Container Initiative (OCI). Container technologies and standards are platform-neutral.
**Container Hosting and Orchestration**: A hypervisor runs multiple machine images simultaneously in virtualized environments. A container host is the nearest equivalent of a hypervisor that runs numerous container images. At its most basic, a container host supports the deployment of container images and their starting and stopping. The container host routes the network traffic related to API calls to each container image. It monitors the utilization and performance of each container and can scale additional container images up or down, vertically within a host or horizontally across multiple hosts. Multiple container hosts are known as a cluster. Every public cloud provider provides a proprietary container service, often compatible with the Kubernetes open standard. Additionally, hybrid cloud technology providers like IBM and Red Hat provide the OpenShift Cloud Platform, a fully compliant Kubernetes container hosting and orchestration capability that can be deployed and operated consistently in any cloud provider location or any data center. Red Hat’s Open Cloud Platform, OpenShift, incorporates Kubernetes container hosting and orchestration.
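The scale-up/scale-down behaviour described above can be sketched as a toy scaling rule. This is an illustrative simplification in the spirit of the Kubernetes horizontal pod autoscaler formula, not its actual implementation; the parameter names and limits are invented for the example.

```python
import math

def desired_replicas(current: int, avg_utilization: float,
                     target: float = 0.5, max_replicas: int = 10) -> int:
    # Grow or shrink the replica count so that average utilization
    # moves toward the target; ceil prefers over- to under-provisioning,
    # and the result is clamped to a sane range.
    wanted = math.ceil(current * avg_utilization / target)
    return max(1, min(max_replicas, wanted))

print(desired_replicas(2, 0.9))  # 4: overloaded service scales out
print(desired_replicas(4, 0.2))  # 2: underused service scales in
```

A real orchestrator adds stabilization windows and rate limits around such a rule so that noisy metrics do not cause the replica count to oscillate.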
**A Service Mesh**: provides secure routing between microservices within a container host and between multiple hosts in a cluster. The mesh layer acts as an API traffic router. As API request traffic flows across the mesh, it can be logged or captured for analysis and observability. Istio is an open standard for the service mesh. Red Hat’s Open Cloud Platform, OpenShift, can integrate a service mesh.
**Multicloud API Networking Accelerators**: When containers are dispersed across multiple locations, edges, or clouds, connecting them safely can be challenging. Frameworks like Skupper have emerged to simplify interconnects, providing a virtualized, secure, and highly available API network across many locations. The benefits of a framework like Skupper include no changes to application code or configuration, inherent encryption and security protections, and no need for VPNs or for exposing ports on the internet. From a networking perspective, Skupper provides dynamic load balancing based upon service capability, cost- and locality-based traffic forwarding, and redundant routing to protect against network outages.
**A Container Repository**: simply a library of available container images, ready for deployment, each compliant with the Open Container Initiative format: a digital library of first-party in-house container images and their configuration, or open or proprietary images sourced from third-party suppliers. Enterprises typically use private repositories that run under their control and protection. Red Hat's Open Cloud Platform, OpenShift, incorporates a container repository. Frameworks like Skupper are beneficial in safely exposing APIs on IBM zSystems to a broader platform ecosystem.
**A Container Scanner**: akin to a virus scanner, a container scanner searches for and identifies vulnerabilities in a container image and produces a report that enables a user to determine whether the image is safe to deploy. Multiple vendors offer container scanning capabilities, broadly adopted in regulated industries. Some firms have a policy that a container image may not be deployed to a container repository or a container host until scanning indicates that it is safe.
**A Source Code Library**: most developers are very familiar with source code control systems that provide a library where code and configuration artifacts can be stored, versioned, and retrieved. Any project with multiple developers working on the same code artifacts needs a library and control mechanisms. Just like in a book library, artifacts such as files are checked out and checked back in by registered members under the supervision of a librarian. Unlike a book library, code artifacts are typically changed between check-out and check-in. A person performing the librarian's role can approve or decline changes made upon check-in, preventing bad changes from entering a codebase. The library enables extracts of any codebase version for review, compilation, building, and testing. The predominant open standard for the source code library is Git, created by Linus Torvalds, the creator of Linux. While Git is a genuinely open standard, multiple companies package it commercially.
**DevOps Pipeline**: combines and integrates development and operational processes. Before DevOps, organizations separated development, build, test, release, and operations functions. Each SDLC step was distinct, discrete, and unintegrated from the previous and next steps. The predominant analogy was "throwing code over a wall." This fire-and-forget software delivery method was fraught with skills-transfer and knowledge issues, worsened by the absence of automated hand-offs. DevOps addresses each of these issues head-on. DevOps works on the grounding principle that every step and hand-off of a development and release process is integrated and automated. The integration is provided by describing every build or configuration step in a machine-executable model. When infrastructure steps are built into a DevOps pipeline, this is known as "Infrastructure as Code," as the executable model provisions the required infrastructure. Because the model is machine-executable, it can be validated for compliance with security policies; hence DevOps can be referred to as DevSecOps or DevSecComplianceOps. The term pipeline reflects that the build, package, and release steps use multiple underlying tools such as compilers, linkers, packagers, and checkers. Each tool in the chain (toolchain) has a distinct function but operates in a coordinated and integrated way with its partners. There is a close relationship between the Source Code Library and a DevOps Pipeline. Committed changes to source code or configuration assets automatically invoke a DevOps pipeline to build and deliver code through an automated release process. This level of integration and automation is referred to as "CI/CD," or Continuous Integration and Continuous Delivery. If only the code for a single microservice changes, the only deployment needed following a release approval step is the re-deployment of the container for that microservice.
While a DevOps pipeline is often fully automated, regulated firms will use approval steps in a pipeline to ensure that essential scrutiny is given to code changes and release processes. Jenkins and Tekton are popular DevOps pipeline tools. Red Hat’s Open Cloud Platform, OpenShift, incorporates a DevOps pipeline.
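The grounding principle above, that every step and hand-off is integrated and automated with optional approval gates, can be sketched in a few lines. The stage names and the approval hook below are illustrative assumptions, not the API of any particular tool such as Jenkins or Tekton:

```python
# Minimal sketch of a CI/CD pipeline: an ordered chain of automated
# stages with an optional manual approval gate before release, as
# regulated firms often require. Stage names are invented for the
# example and do not correspond to any specific product.

def run_pipeline(commit, approve=lambda stage: True):
    """Run each stage in order; stop at the first withheld approval."""
    stages = ["build", "unit-test", "security-scan", "package", "release", "deploy"]
    completed = []
    for stage in stages:
        if stage == "release" and not approve(stage):
            return completed, "halted: release approval withheld"
        completed.append(f"{stage}:{commit}")
    return completed, "deployed"

# A commit that passes every automated stage and the approval gate:
steps, status = run_pipeline("abc123")
```

In a real pipeline, each stage would invoke a tool in the toolchain (compiler, scanner, packager); the point here is only the integrated, automated hand-off from one stage to the next.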
**Site Reliability Engineering**: Site reliability engineering (SRE) is a software engineering approach to IT operations. SRE teams use software frameworks like Ansible to manage systems, solve problems, and automate operational tasks. SRE takes the tasks that operations teams have historically done, often manually, and gives them to engineers or ops teams who use software and automation to solve problems and manage production systems. SRE is a valuable practice when creating scalable and highly reliable software systems. It helps manage large systems through code, which is more scalable and sustainable than sysadmins managing hundreds or thousands of machines by hand. SRE teams are responsible for how code is deployed, configured, and monitored, as well as the availability, latency, change management, emergency response, and capacity management of services in production. Site reliability engineering helps teams determine what new features can be launched and when by using service-level agreements (SLAs) to define the required reliability of the system through service-level indicators (SLIs) and service-level objectives (SLOs). With SRE, 100% reliability is not expected; failure is planned for and accepted, and recovery from failures and outages is automated.
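The SLO arithmetic behind "100% reliability is not expected" is simple enough to show directly. A minimal sketch, assuming a 30-day month and an illustrative 99.9% availability objective (real SLOs are defined per service):

```python
# Error-budget math behind SLOs: an availability objective below 100%
# implies a budget of "allowed" downtime that failures may consume.

def error_budget_minutes(slo: float, period_minutes: int = 30 * 24 * 60) -> float:
    """Downtime permitted by an SLO over a period (default: 30-day month)."""
    return (1.0 - slo) * period_minutes

budget = error_budget_minutes(0.999)   # 43.2 minutes per 30-day month
```

An SRE team can spend this budget on planned changes or absorb it in incidents; when the budget is exhausted, feature launches are typically paused in favor of reliability work.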
**Hybrid Databases and Stores**: database technologies continue to evolve. Relational databases are powerful but not best suited to every use case. For example, databases that support non-structured data are prevalent and are based upon NoSQL (Not Only SQL) principles. Popular NoSQL vendors include MongoDB and Cassandra. Graph databases represent entity-based relationships for specific analytical purposes, and time-series databases are popular for capturing and analyzing time-based data streams. Regardless of the database style, databases must participate in a hybrid ecosystem regarding their operations and how they can be called from APIs and microservices. Operationally, many databases are now offered as container images that can be deployed to a container host. The IBM Cloud Pak for Data offers multiple databases and connectors.
**Agile Development Methods**: Conventional development was done using the familiar waterfall technique. Waterfall is mostly a burdensome serial process and can be rigid. While there are clear advantages to good discipline for software development, the limitations can outweigh the benefits. Agile fosters rapid design processes and delivers minimum viable capabilities. Agile techniques include recommendations for project size, work-effort structure, team size and structure, team communications, and project management.
### Existing Investments Must Participate
Few organizations can start a new development program that disregards incumbent technology choices or ongoing financial commitments. Generally, only start-ups have the short-lived luxury of a clean sheet. A mature organization has multi-year investments in technology platforms and physical or human assets such as data centers and operations teams. Data centers often represent billions of dollars of investment amortized over a facility's twenty- to thirty-year life. Hardware and software platforms are typically amortized over three to five years. While these factors seem obvious, many development programs that aspire to create value or savings, counter-intuitively, exclude incumbent costs from their ROI calculations. New developments increase the expense run-rate and only deliver run-rate savings when existing investments are eventually decommissioned. Financially responsible firms will seek to leverage existing investments within a newly established hybrid ecosystem.
Moving forward with addressing existing technical debt is challenging. Clients typically consider the following approaches when planning for application modernization. Each needs to be viewed through the lenses of financial and risk measures, predominantly geared towards assessing ROI and expense. Remember that transition is not just about making an application current today but how to best position it to remain current. This is why we recommend that most assets be transitioned using Iterative Modernization.
**Lift and Shift**
_Easy to Start. However, value creation is sub-optimized due to a primary focus on infrastructure._
Using modernized infrastructure without modernizing an application is the essence of lift and shift. Depending upon the technology platform, it may be possible to lift and shift an application into a virtualized environment or a small number of container images. Code typically runs ‘as-is’ on the new infrastructure.
Lifting and shifting potentially eliminates the platform benefits of the original hosting environment, such as availability, integrity, reliability, throughput, and security. Doing so can result in reduced service levels and increased outages. A chipset change between the existing and new platforms can also result in incompatibility issues and the need to recompile aged code.
Lifting and shifting does not introduce new business capabilities, so it is simply a play to reduce the run-rate cost of existing platforms and operations teams. In many cases, it is not even successful in doing so.
**Rip and Replace with New**
_Requires Upfront Capital Expenditure. Customization creates risk._
Clients seduced by the agility of their emerging disruptive competitors often choose to build or buy a completely new system. However, this is usually done while ignoring or miscalculating an existing asset's value. Determining what is a 'sunk cost' and what is an 'underutilized asset' is challenging. Appropriate time should be invested before making this choice to ensure the optimal ROI.
While it may be simple to buy a new platform, customization often sinks such projects. Most financial services firms demand extensive customization of cookie-cutter platforms. Customization is often seen as the final 5%, but anyone who has experienced a construction project knows that the last five percent of work consumes disproportionate cost and time. Customization frequently introduces delivery or execution risk.
**Iterative Modernization**
_Balancing investment with value creation. Preferred by most clients._
Clients looking to balance risk, ROI, and expenditure choose this approach. Iterative modernization involves introducing impactful gradual changes to existing code and systems. This aligns well with the overall business strategy embracing modernization as an ongoing investment.
In its most basic form, iterative modernization takes three styles that may be used separately or in combination:
- **Modify existing code:** depending upon the knowledge and understanding of existing code, this can be akin to open-heart surgery. It involves manipulating current code in its existing programming language and assumes that skills are readily available. A growing number of automated analysis and migration tools are becoming available to assist with code conversion and refactoring. IBM and others have applied AI-based machine learning to understand existing COBOL helping understand business logic, dependencies, and conversion to Java or an intermediate programming language.
- **On-platform modernization accelerators:** The introduction of new middleware can simplify interfacing applications.
- **The Extension of new code alongside existing code:** New code is created in current programming languages and is interfaced with existing core functions via APIs (RESTful or programmatic), messaging middleware, or database capture/synchronization. Provided the new code effectively uses open source and containers, it can run in any destination chosen by the client.
ROI and Technical Debt represent seriously inconvenient truths, especially when ranked lower than technical criteria. Technologists enjoy pursuing new technologies and approaches without validating their ability to reduce operating expenses, simplify a business, or drive new revenue and profit. Financial considerations need to be at the heart of all technology strategies, including the hybrid approach.
IBM's experience with clients has demonstrated a 2.5x lift in financial benefits from a hybrid platform approach over the economic levers of a public cloud-only deployment. Every business will need to consider ROI through industry- and business-specific lenses.
Adopting a modern computing approach provides demonstrable ROI improvements. Fundamentally running a hybrid or modernized environment ensures that enterprises can minimize the investment necessary to continue to benefit from existing assets while simultaneously competing on a level playing field with new entrants who are leveraging the latest innovations in technology.
Several essential factors in an ROI computation influenced by technical debt present challenges because the value is considered ‘soft’ or hard to assign. Two examples immediately come to mind:
- **Availability of Skills:** The availability of technical skills is a pressing challenge. The inability to attract and hire or procure appropriate skills can inhibit any planned transformation program. Skills may be necessary for two key reasons. First is the ability to understand code as it is currently implemented; on occasion, the only way to understand a codebase is via human investigation and expertise. Second, if a codebase needs to be updated or extended, experienced hands-on skills are likely required. The expense here does not relate to staffing costs alone. Instead, it relates both to the cost of being unable to execute development activities and to the dollar impact of missed business opportunities.
- **Cyber-Security Risk:** Aging code is prone to cyber-security issues, one of the existential topics for regulated firms. Using appropriate technical skills, vulnerabilities in aging code need to be identified and mitigated, assuming that workarounds exist. If a codebase uses a proprietary library identified as 1) end-of-life and 2) containing a risk, a client may have no choice but to remove the dependency by replacing the library with an equivalent. While the cost of replacement activities is measurable, the potential cost of an incident is less easy to quantify. When quantified, the case and ROI for making changes are compelling.
Because hybrid technology enables clients to make their investments fungible, interoperable, and substitutable, they can always make changes as technology evolves. Better ROI is possible both in the short term and over an extended period.
While this topic is critical, a very comprehensive view of technical debt and ROI is beyond the scope of this document.
### Fit-for-Purpose Platform Decisions
The assessment of whether a workload is deployed to a fit-for-purpose platform can be very subjective. Subjectivity here is most often a factor of understanding and comfort: a person very comfortable in one technology domain may not understand why other technologies could be more appropriate, relevant, or simple.
An objective way to think about Fit-for-Purpose is to consider non-functional requirements (NFR), often known by architects as the ‘ilities.’ For a comprehensive overview of non-functional requirements, see https://en.wikipedia.org/wiki/Non-functional_requirement where 66 types of NFR are listed. See the attached graphic.
The specific significance of an NFR is that it is a requirement, not a platform characteristic. Each platform implements NFRs differently, if at all. Some platforms support NFRs inherently, simplifying a potential development burden. If a platform does not implement an NFR, but that NFR must still be supported, a developer needs to find platform-independent mechanisms, such as proprietary, open, or home-grown software techniques. Non-functional requirements are often mandated through regulation; where regulated, implementing an NFR is mandatory. One specific example is transactional integrity and consistency. Real-time consistency is often not available in public cloud database services, whose loosely coupled architectures do not support it. They offer "eventual consistency," meaning that transactional updates may take time to replicate across nodes. A client must decide whether eventual consistency is sufficient for financial transactional data.
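The eventual-consistency trade-off just described can be illustrated with a toy in-memory store. The class and its explicit replication step are invented for the example and stand in for a real replicated database:

```python
# Toy illustration of eventual consistency: a write lands on the primary
# immediately, but a replica only sees it after replication catches up.
# A read served by the lagging replica returns stale data, which is the
# trade-off a client must weigh for financial transactional data.

class EventuallyConsistentStore:
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = []            # writes not yet replicated

    def write(self, key, value):
        self.primary[key] = value    # durable on the primary at once
        self.pending.append((key, value))

    def read_replica(self, key):
        return self.replica.get(key) # may be stale

    def replicate(self):
        for key, value in self.pending:
            self.replica[key] = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.write("balance:acct-1", 100)
stale = store.read_replica("balance:acct-1")   # None: replica lags
store.replicate()
fresh = store.read_replica("balance:acct-1")   # 100: eventually consistent
```

A real system replicates asynchronously in the background; the window between `write` and `replicate` here models the period during which different readers can observe different values.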
Linked to NFRs are specific functional requirements or workload characteristics. For example, a transactional workload such as a bank's transactional ledger is distinctive: it must be delivered to support specific non-functional requirements such as security, reliability, availability, recoverability, throughput, scale, integrity, consistency, and serviceability. On an enterprise server platform such as IBM zSystems, these non-functional requirements are inherently and fully supported by the operating system and middleware. A commodity platform may not inherently implement all non-functional requirements, resulting either in a compromise that de-prioritizes some NFRs or in allocating additional cost to implement workaround approaches.
Data gravity, sovereignty, and transactional gravity are essential considerations in fit-for-purpose architecture and most often become considerations when ignored or misunderstood. Gravity implies that data retrieval and transaction updates are typically optimal when co-located. For example, if most of a firm's core transactional data is on a server in Texas, processing that data in a cloud data center five hundred miles away will result in multiple side effects. The first side effect is latency due to network distance and bandwidth; latency could render an application useless for performance or throughput reasons. Secondly, data egress charges from a cloud provider create surprise bills. When data processing and databases are co-located and readily accessible, data sovereignty, locality, and gravity considerations minimize latency, cost, and the number of copies of data in the enterprise. In other words, the platform(s) that host the largest bodies of transactions will draw closer the business apps that need to extend or integrate with them. In the case of existing investments, most new business services need to integrate with existing business services, in many cases within the same transactional scope. This is another critical consideration for workload placement.
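A back-of-envelope calculation makes the latency side effect concrete. This is a hedged sketch assuming signals in fiber cover roughly 200 km per millisecond; the figures are order-of-magnitude estimates, not measurements:

```python
# Rough physics behind the data-gravity argument: the minimum round trip
# between an application and a database 500 miles apart. Real latency is
# higher once routing, queuing, and protocol overhead are added.

def min_round_trip_ms(miles: float, fiber_km_per_ms: float = 200.0) -> float:
    km = miles * 1.609               # miles to kilometers
    return 2 * km / fiber_km_per_ms  # out and back

per_call = min_round_trip_ms(500)    # ~8 ms best case per database call
chatty_request = 50 * per_call       # 50 sequential DB calls per request
```

A single call looks harmless, but a "chatty" business transaction that makes dozens of sequential database calls multiplies that floor into hundreds of milliseconds, which is why co-locating processing with the data it touches matters.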
The final consideration is whether a technology platform is a partial or complete participant in a hybrid cloud ecosystem. At its minimum, a platform needs to offer:
- Modern and legacy programming languages
- APIs and Microservice integration
- Containers and Container Orchestration
- A Standard visual development environment, source code library, and DevOps toolchain
- Ready access to inherent data services.
IBM's perspective is that each of these capabilities is necessary, and each of our platforms, including IBM zSystems, POWER, and IBM Cloud, offers them consistently.
A hybrid cloud developer can access and gain value from the capabilities of IBM’s platforms without inherent underlying knowledge of each hardware or operating system platform. IBM can deliver these capabilities because of the availability of Red Hat and IBM’s hybrid cloud accelerator offerings.
For clients that decide that they want to adopt Hybrid Technologies, there may still be uncertainty about the need to move to the cloud. Specifically, some financial services clients have no intention of running workloads or storing data in a public or private service offered by a cloud provider. However, those same clients want the benefit of a hybrid cloud, specifically the acceleration of delivery and the adoption of standards-based software and frameworks on fit-for-purpose technology platforms.
Clients should not need to choose whether their environment is exclusively public, private, or on-premises. They will likely choose to run their workloads in multiple locations achieving all combined platform benefits. The point is not to choose only one way. Instead, the point is to adopt a technology platform that can be continuously evolved and optimized with business outcomes as the primary driver.
The plethora of technologies, frameworks, and standards in a hybrid ecosystem might suggest that it is complicated, challenging to learn, and hard to get started. When developers and operations staff need to be productive, complexity gets in the way and becomes an inhibitor to business acceleration. Another dimension to complexity is introduced when using multiple cloud providers, especially as a client locks into more and more proprietary services. Doing so creates a need to have specific framework skills in each cloud provider’s overarching frameworks and knowledge of their particular services.
Red Hat OpenShift and the Red Hat Open Cloud Platform deliver a consistent developer and operational experience regardless of where technology is imagined, developed, and delivered. OpenShift is often thought of primarily as the market-leading Kubernetes offering. The Red Hat OpenShift experience can be delivered everywhere, with product offerings for customer-managed environments or options to buy as a service from IBM, Red Hat, and cloud providers. The use of OpenShift is not limited to distributed x86 platforms. Its use on IBM POWER and IBM zSystems is proliferating.
However, minimizing OpenShift's role to Kubernetes is too limiting. OpenShift also delivers a much broader developer experience that includes an ecosystem of integrated and open developer frameworks: a security model, a DevOps toolchain, and a container repository. Due to its complexity, security is often an afterthought. However, OpenShift reduces operational risk by integrating security directly into an automated DevOps pipeline, using built-in policy templates that enforce a configuration's adherence to security and compliance policies. Doing so protects application workloads at runtime.
In addition, Red Hat OpenShift is IBM’s hosting platform for the IBM Cloud Paks. Cloud Paks enable developers to accelerate application development. OpenShift allows developers and ops teams to host the Cloud Paks wherever data and processing are.
A side effect of workload portability is insulating and hedging technology risk. In brief, we could think about two pre-eminent risks: assuring resilience and facilitating workload portability. A consistent hybrid operating environment running across multiple locations enables business continuity when one site fails. Recently, this has been top of mind for enterprises that placed workloads with cloud providers that took significant outages. Organizations that could not automatically pivot to other workload processing locations went dark, unable to perform standard business functions or transactions. Additionally, legislation is emerging in Europe to ensure that firms do not lock deeply into a cloud provider. If a situation, e.g., a regulatory mandate or a client decision, determines a need to move workload, the hybrid technology platform enables a client to remove that workload from one cloud provider and re-host it at another location, whether on-prem, a hosting center, or an alternate cloud provider.
There is no question that IBM’s clients for the mainframe need it to be a dynamic, vibrant, and innovative hybrid technology platform. Despite constant propaganda from IBM’s competitors, clients need IBM zSystems’ extreme reliability, security, throughput, integrity, and availability for their mission-critical core product-processing transactional systems.
While IBM zSystems’ hardware and operating systems are vibrant and highly contemporary, many third-party application platforms are not current. Because mainframe systems run predictably for decades, independent software vendors and clients have consistently under-invested in their application platforms, resulting in technical debt.
Two operating systems, z/OS and Linux, are full participants in a hybrid ecosystem for the IBM zSystems environment. As clients build new digital experiences for consumers that access IBM zSystems data and applications, they can build once and run them anywhere. They can tap into the Red Hat OpenShift platform and use containers to modernize with lower risk and lower cost without sacrificing resiliency and security. The availability of containerization on IBM zSystems enables the co-location of cloud-native applications on the same system hosting enterprise data, reducing the cores needed by up to 3.6 times compared to using remotely connected x86.
IBM and our partners provide technologies to help modernize the application platforms on the mainframe to embrace a hybrid future.
There's an obvious reason for the mainframe to be an intense source of value for clients and equally a target for IBM's competitors: the most crucial product transaction data for large enterprises typically sits in mainframe databases, driving data gravity considerations. Competitors primarily target moving data and processing off the mainframe but often do so at a reduced quality of service and increased risk. Projects typically focus on replicating or extracting mainframe data via ETL and data movement. Such an approach increases mainframe utilization cost, increases data movement cost, and results in multiple 'shadow' copies of mainframe data away from the transactional core.
In comparison, IBM’s offerings provide the following platform capabilities that seamlessly integrate IBM zSystems as a first-class participant in a hybrid cloud ecosystem:
- The IBM zSystems Digital Integration Hub provides secured virtualized and cached access to mainframe data via SQL, ODBC, and JDBC.
- z/OS Connect exposes mainframe transactions through API calls
- IBM z/OS Container Extensions (zCX) enable a Docker container to host a microservice that co-mingles both Linux and z/OS functionality
- Red Hat OpenShift provides Kubernetes capability on both Linux and z/OS
- IBM zSystems’ developer tooling and programming language coverage provide complete equivalence to distributed platforms.
- DevOps platform components such as Git and Jenkins are readily available on IBM zSystems. An IBM zSystems build and deployment target can be completely transparent to a developer.
- Analytical tooling from IBM and our partners helps use AI to analyze existing COBOL and Assembler code. The analysis stage is essential before remediation.
Finally, IBM added support to spin up an on-demand z/OS dev or test environment within the IBM Cloud in minutes. This option provides clients with new agility to accelerate the development process.
### Modernization Approaches for Software Platforms on IBM zSystems
We previously discussed Lift & Shift, Iterate & Extend, and Buy New as modernization approaches. Each of these can apply to application platforms on IBM zSystems.
Iterating and extending is the primary modernization approach adopted by enterprises running IBM zSystems. This approach has six variants.
**Expose mainframe data to new analytic or business functions:** the center of gravity for enterprise data in most enterprises is the mainframe. Hybrid analytics and AI platforms such as the IBM Cloud Pak for Data can access data sources via SQL or JDBC connectors. Mainframe data was traditionally considered to be locked away and hard to access. One approach to work around a perceived lock is to extract data and move it. However, we'd argue that a better way is secured virtualized access to mainframe data via the z/Digital Integration Hub.
**Exposing existing transactions as microservices via APIs:** z/OS Connect exposes existing mainframe transactions via API calls. This is the simplest way to extend current code into the hybrid technology platform alongside virtualized data access. Invoking an API from another microservice executes the transaction mapped to the API call. The transactional response is returned in a JSON format.
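Consuming such an API-mapped transaction amounts to invoking an endpoint and decoding the JSON response. A minimal sketch follows; the payload shape and field names are hypothetical, since real z/OS Connect mappings are defined per transaction:

```python
import json

# Hedged sketch of consuming a mainframe transaction exposed as an API
# in the style the text describes: the transactional response arrives
# as JSON and is decoded into native types for the calling microservice.
# The field names here are invented for illustration.

def parse_transaction_response(body: str) -> dict:
    """Decode the JSON body returned for a mapped inquiry transaction."""
    response = json.loads(body)
    return {
        "account": response["account"],
        "balance": float(response["balance"]),
        "status": response["status"],
    }

# What a mapped balance-inquiry transaction might return:
sample = '{"account": "12345678", "balance": "250.75", "status": "OK"}'
result = parse_transaction_response(sample)
```

The caller never needs to know that COBOL or CICS sits behind the endpoint; the API boundary and the JSON contract are the whole interface.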
**Adding new code in a current code base without modernization:** this is a viable approach for clients with a deep understanding of their existing core platform. Even with code analysis, adding new capabilities can be highly complex for those who lack insight into their current state. Changing code without modernization does not typically de-couple monolithic user interface and data functions.
**De-Couple Monolithic Code and Containerize:** The most compelling approach to minimizing technical debt and creating future value is to refactor existing code for containerization. Analytic tools are essential to understanding how to demarcate functional and transactional boundaries. Automated migration assistants are also valuable. The containerization approach does not mandate a change of programming language from COBOL, although doing so might be desirable.
**Extending new business functions through language interoperability:** The ability for a Java application to programmatically interface with COBOL and PL/I assets within the same unit of work. This enables new business services to be easily added in Java, sitting right alongside COBOL and PL/I business services, all managed by IBM zSystems.
**Extending new business function alongside an existing codebase:** The extend approach bridges existing code and new code. Several mechanisms bridge existing and extended code, such as the SQL or event-driven capabilities of the zSystems Digital Integration Hub, a messaging approach such as MQ, or a data replication approach via the capture of changes to a database (change data capture). Given data gravity considerations, it will likely be optimal to host an extended function on the same IBM zSystems server, in a Linux or OpenShift environment. Extending is typically chosen because an objective analysis of existing code demonstrates its re-factoring to be complex. There are considerations for new data created by an extended function; for simplicity, it is recommended that, where possible, new tables and fields be added to existing databases, though this may not always be possible.
As already explored, the center of gravity for data and processing is a compelling reason enough to co-locate the code supporting new digital client experiences on IBM zSystems.
Organizations have avoided doing modernization for its own sake, believing that the ROI is not compelling. Given this, the catalyst for undertaking a modernization initiative is a combination of what needs to be delivered, for example, a new digital experience, and how it needs to be delivered, the 'how' being non-functional requirements and fit-for-purpose decisions.
While a detailed examination of use-cases is a topic for another occasion, two patterns are outlined here:
**Exposing new digital channels that leverage IBM zSystems databases and transactions**
One intense focus area for accelerating digital transformation has been the innovation of client experience. The consumerization of IT is a ubiquitous theme as customers demand a customer experience that is aligned with their favorite mobile apps or in-store experiences. Simple examples center on anticipating a customer’s likely need, understanding their relationship history, and removing friction from every part of their customer journey. In many industries, digital customer experiences are reliant on information in a transactional core system of record. The public cloud provider’s recommendation has been to remove data from the mainframe in an attempt to shift the center of gravity. This has resulted in data movement charges, multiple copies of data, security risks, and data inconsistency. The reason for wanting to move the data was also based on a perception that it is hard to access. By leveraging new zSystems capabilities like the zSystems Digital Integration Hub, developers can securely access curated and virtualized views of raw transactional data through standard SQL APIs or as injected events into an event-driven architecture. From new digital channels, developers can write back to mainframe transactions through APIs or SQL. This is a terrific compromise as data on the mainframe is no longer locked from developers, and security and controls have not been abandoned.
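The data-access pattern above can be sketched with plain SQL. In this minimal illustration, `sqlite3` stands in for the Digital Integration Hub's SQL endpoint, and the table, view, and column names are invented for the example, not IBM APIs:

```python
import sqlite3

# sqlite3 stands in for the Hub's standard SQL API; the schema and
# the curated view exposed to channel developers are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (acct TEXT, amount REAL, posted TEXT);
    INSERT INTO transactions VALUES
        ('A-100', 42.50, '2022-04-01'),
        ('A-100', 19.99, '2022-04-02'),
        ('B-200', 250.00, '2022-04-02');
    -- Curated, virtualized view over raw transactional data:
    CREATE VIEW account_activity AS
        SELECT acct, COUNT(*) AS txn_count, SUM(amount) AS total
        FROM transactions GROUP BY acct;
""")

# A digital-channel developer queries the view with ordinary SQL,
# never touching the underlying core tables directly.
rows = conn.execute(
    "SELECT txn_count, total FROM account_activity WHERE acct = ?",
    ("A-100",),
).fetchone()
```

The design point is that the channel consumes a curated view, so security and controls on the underlying system of record remain intact.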
**Delivering AI inference inside an existing transaction on IBM zSystems**
AI-infused decision-making has become prevalent in almost every business process. Firms are using AI to:
- Understand context and intent through natural language processing
- Create insight to identify a business opportunity or optimize an operation
- Detect risk or behavior associated with fraud, money laundering, or other crimes
However, until recently, executing a machine learning model within an on-platform transactional scope was typically not done. AI was executed in one of two ways. First, it was offloaded to an off-platform service, often hosted in a public cloud; doing so could introduce a second of latency into a credit card authorization, breaking transactional SLAs. Second, AI was performed after the fact, after the transaction had closed, so fraudulent activity could go uncaught and incur a financial liability.
The IBM Telum processor on the IBM z16, announced in April 2022, embeds AI capabilities directly onto the mainframe processor. This can enable customers to leverage the results of AI inference to better control the outcome of transactions before they complete.
For example, leveraging AI for risk mitigation in Clearing & Settlement applications to predict which trades or transactions have high risk exposures and to propose solutions for a more efficient settlement process. A more expedited remediation of questionable transactions can help clients prevent costly consequences and negative business impact.
For instance, an international bank uses AI on IBM zSystems as part of their credit card authorization process instead of using an off-platform inference solution. As a result, the bank can detect fraud during its credit card transaction authorization processing. For the future, this client is looking to attain sub-millisecond response times, exploiting complex deep learning AI models, while maintaining the critical scale and throughput needed to score up to 100,000 transactions per second, nearly a 10X increase over what they can achieve today. The client wants consistent and reliable inference response times, with low millisecond latency to examine every transaction for fraud.
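The in-transaction scoring pattern can be illustrated with a minimal sketch. The model weights, feature names, and threshold below are invented for illustration only; a production system would run a trained deep learning model on the on-chip accelerator rather than this toy logistic score:

```python
import math

# Hypothetical in-transaction fraud check. The weights, feature names,
# and threshold are illustrative assumptions, not a real model.
WEIGHTS = {"amount_z": 1.8, "foreign": 1.2, "night": 0.6}
BIAS = -3.0
THRESHOLD = 0.5  # scores at or above this flag the transaction

def fraud_score(features):
    """Logistic score in [0, 1], computed in-line before authorization."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def authorize(txn):
    """Score the transaction inside the authorization path, so a risky
    payment can be stopped before the transaction completes."""
    score = fraud_score(txn["features"])
    if score >= THRESHOLD:
        return {"approved": False, "reason": "fraud_suspected", "score": score}
    return {"approved": True, "score": score}

# A routine domestic daytime purchase passes; a large foreign
# night-time purchase is flagged before authorization completes.
ok = authorize({"features": {"amount_z": 0.1, "foreign": 0.0, "night": 0.0}})
flagged = authorize({"features": {"amount_z": 2.5, "foreign": 1.0, "night": 1.0}})
```

The contrast with the legacy approaches is that the score is produced inside the authorization path, not by an off-platform call or an after-the-fact batch job.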
IBM’s definition of the hybrid stack includes AI. Why? While it may seem counter-intuitive or overstated, there are two clear reasons.
Firstly, IBM asserts that all applications will benefit from AI-infused decisioning and automation capabilities, adopted by invoking microservices that are either remote from or co-located with data and other processing. For clients, the most critical decision is choosing where AI services are trained and executed. Machine learning training often relies on highly sensitive data that cannot leave a client’s production environment; a cloud provider that requires data movement into its environment to complete model training becomes a non-starter. Likewise, a client may not wish to incur the latency of calling off-platform to an external ML model execution service during a low-latency fraud check. AI capabilities need to be available in hybrid building blocks, as every new application will consume AI services.
Secondly, next-generation monitoring and operational automation systems require AI-infused operations, or AI-OPS. Hybrid cloud architectures are de-coupled and can run in multiple places. Arguably, monitoring hybrid application deployments becomes too complicated for a human operator, given the number of sites where code runs and the number of interfaces between code modules. AI-infused monitoring can keep track of instrumentation and health data emitted by each hybrid environment component. AI can understand related and unrelated event patterns and correlate them to identify non-obvious patterns and relationships. AI fuels observability: the ability to gain a deep understanding of code execution across multiple processing locations and to pinpoint bottlenecks and errors. AI can suggest how to remediate code issues. IBM’s Instana offering provides hybrid native application observability and performance management. Additionally, clients wish to optimize their resource consumption to ensure they pay only for the external services needed to support business objectives. IBM’s Turbonomic offering uses AI to recommend and execute configuration changes that support business objectives, growing or compressing environments and optimizing external spending with a cloud provider.
A key consideration for clients modernizing their workloads and leveraging the public cloud is extending their security perimeter following zero-trust principles to encompass the hybrid cloud paradigm while not compromising regulatory or security requirements. The following section discusses the critical aspects of designing a secure cloud environment and shows how IBM Cloud provides a set of capabilities that helps accelerate the transformation. The following table highlights the core design principles that underlie implementing a secure hybrid cloud environment.
IBM Cloud has been designed with the exacting demands of the world’s largest and most complex organizations in mind. It relies on the cryptographic technologies shown to be most effective in the financial industry. Data that a client stores on IBM Cloud belongs only to that client and can only be accessed by them. Clients can bring and keep their own encryption keys, which no one else (not even IBM) can see, and can build and run core business applications and workloads with single-dashboard visibility and multiplatform portability.
A key differentiator that IBM Cloud provides for Financial Services is the IBM Cloud Framework for Financial Services, a purpose-built framework with specific controls that address the unique risks of the financial services industry.
<table>
<thead>
<tr>
<th>Defense in Depth</th>
<th>Cloud infrastructure will provide multiple, redundant layers of security safeguards to prevent compromise of the service from a single point of attack.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Restricted Privileged Rights</td>
<td>No individual should be given enough privileges to misuse a system on their own and should be granted the minimum required authorizations to perform their activities.</td>
</tr>
<tr>
<td>Safeguard Data</td>
<td>Data is a valuable asset that needs to be protected from unauthorized disclosure, modification and destruction.</td>
</tr>
<tr>
<td>Continuous Controls Assurance</td>
<td>Security controls must be configured securely by default through automation and checked continuously for compliance to provide continued controls assurance.</td>
</tr>
<tr>
<td>Detection and Response</td>
<td>Enable traceability through logging, monitoring, alerting, and collection of audit information in real time.</td>
</tr>
<tr>
<td>Service Resilience</td>
<td>Availability of services and data are critical to operation of business applications, and they will incorporate multiple levels of resilience to maintain cloud services even after multiple component failures.</td>
</tr>
<tr>
<td>Secure by Design and SW Integrity</td>
<td>Follow secure development/operations processes and ensure software integrity through automation.</td>
</tr>
</tbody>
</table>
The Framework was designed in collaboration with leading banks and with Promontory, an IBM company and a global leader in financial services regulatory compliance advisory services. The Framework enables an approach that delivers industry-specific risk-centric controls at the intersection of business and technology. It provides a standard set of controls spanning financial institutions, ISV / SaaS providers, and IBM Cloud Services (see Figure 2 below).
The IBM Framework enables the IBM Cloud for Financial Services to stay current as regulations evolve. This helps clients minimize the burden of regulatory obligations. Through the work with Promontory, the IBM Framework is intended to consider the regulatory requirements of financial institutions from over 75 financial services regulators in 24 countries, including nations in Europe, the Americas, and Asia-Pacific. The focus is on those requirements targeted directly at cloud usage and those regulations applicable more broadly to third-party risk management, cybersecurity, and data privacy. Regulations from various regulatory authorities, including those listed in Figure 3, inform and influence the framework and associated controls.
End-to-end data encryption with extensive control: IBM offers the industry’s strongest commercially available cryptographic technology with IBM Cloud Hyper Protect Crypto Services. This service provides the unique “keep your own key” (KYOK) capability, built on FIPS 140-2 Level 4-certified hardware security modules (HSMs), giving clients the ability to retain control of their own encryption keys and the HSMs that protect them. IBM requires that ISV and SaaS providers agree to encrypt data at rest using KYOK. Unauthorized parties, including IBM Cloud for Financial Services personnel, never have access to customer encryption keys, so whenever a customer application encrypts data with those keys, no other party has access to that customer’s data.
Workload-centric security by default: Each workload requires various access and security rules. IBM enables organizations to define and enforce such guidelines by way of integrated container security and DevSecOps for cloud-native applications with Red Hat OpenShift® as a service. Continuous security and compliance: IBM Cloud Security and Compliance Center provides a unified experience to view and manage security and compliance postures. Security, compliance, and operations teams can quickly identify security risks and vulnerabilities, govern cloud resource configurations, and centrally manage compliance with their organization’s and regulatory guidelines.
Multi-Zone Regions (MZR) leverage the underlying capabilities of IBM Cloud for Financial Services to enhance business resiliency and disaster recovery. MZRs comprise multiple high-speed, low-latency, interconnected Availability Zones independent from each other to ensure that single-failure events impact only a single Availability Zone. They enable financial institutions to locate workloads in specific geographies to fit their needs.
**Isolation and segmentation** provide compute isolation and network segmentation capabilities—meaning workloads can be deployed and managed with private-cloud-level security within a public cloud model. Compute isolation offers dedicated servers for cloud-native and VMware workloads, mitigating concerns around shared compute. With software-defined networking constructs, workloads and applications can be deployed within segmented network Availability Zones and secure connectivity across hybrid deployments.
**Prescriptive control implementation** reflects the targeted security and compliance requirements that should be met. For a standardized set of controls to be consistently implemented, assessed, monitored, and remediated, the control requirement must be prescriptive to minimize misinterpretations or incorrect implementations. Prescriptive controls help achieve the consistent and repeatable implementation of those controls across diverse technical environments.
**Logging and auditing rules** require that SaaS and ISV providers log all actions taken through the cloud portal, API, or command-line interface to be recorded in detail using IBM Cloud Activity Tracker. This provides standard logging of activity on systems and services and full session recording of precisely what actions operators take. This information is centrally stored and analyzed. The logging process is auditable to enable tracing of all steps, including logging both successful and unsuccessful events and giving role-based protection at all intervention points. The access logs are stored along with timestamps to assist in analysis and forensics.
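The pattern described above (record every action, successful or unsuccessful, with actor, target, outcome, and timestamp for later analysis and forensics) can be sketched as follows. The field names are illustrative assumptions, not IBM Cloud Activity Tracker's actual event schema:

```python
import json
import time

# Minimal sketch of the audit-logging pattern: every action is
# recorded, both successes and failures, with a timestamp.
# Field names are illustrative, not Activity Tracker's schema.
AUDIT_LOG = []

def record(actor, action, target, outcome):
    entry = {
        "ts": time.time(),   # timestamp to assist analysis and forensics
        "actor": actor,      # who performed the action
        "action": action,    # what was attempted
        "target": target,    # which resource it touched
        "outcome": outcome,  # "success" or "failure" - both are logged
    }
    AUDIT_LOG.append(json.dumps(entry))  # stored centrally as structured data
    return entry

# Both the successful operation and the failed access attempt land
# in the same auditable trail.
record("ops-engineer", "cluster.create", "phoenix-01", "success")
record("unknown-user", "key.read", "master-kek", "failure")
```

Logging failures alongside successes is the point: an audit trail that omits denied attempts cannot support forensics.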
**IBM Cloud Security and Compliance Center:** To aid in preventing compliance drift from the IBM Framework, IBM provides leading-edge tools which help client cloud administrators and application developers mitigate risk and manage compliance. These administrators and developers can automate manual processes with these tools. Enterprise security and compliance policies can be expressed in terms of controls and a demonstrable set of implementation goals. These elements can be composed into a collection of profiles, including NIST 800-53, applicable to financial institution workloads and applications.
**IBM Framework for financial services audit:** Security and risk executives (CISOs and CROs) and the managers of financial institutions using the IBM Cloud for Financial Services can gain efficiency in their internal and regulatory audits with the IBM Framework for Financial Services audit report. An independent third-party assessor performs periodic, rigorous assessments of the IBM Cloud for Financial Services against the IBM Framework. These assessments are more extensive than typical system and organization controls (SOC) 2 audits. They benefit clients by offering more visibility and transparency into the control effectiveness.
No discussion about IBM’s hybrid cloud capabilities is complete without considering how to accelerate application development and operations with the IBM Cloud Paks. Most applications consume higher-level capabilities to keep development from becoming overly custom and to increase the rate and pace of application delivery. All Cloud Paks are designed to be deployed on Red Hat OpenShift in any location. Some are offered on a fully managed Software-as-a-Service basis hosted in multiple enterprise cloud providers, including AWS, Microsoft Azure, and IBM Cloud.
Cloud Paks play an essential role in accelerating application development to support a client’s use-cases. They sit between the Hybrid Cloud Platform running on OpenShift and the client’s application-specific microservice containers. Cloud Paks share a common operational framework, and their business functions can be invoked via API calls.
Cloud Pak® for Data
Accelerates the end-to-end data lifecycle, including databases, data governance, ETL, reporting, data virtualization, natural language processing, virtual agents, and trustworthy AI.
Cloud Pak® for Business Automation
Automates core business operations with workflow, decision management, content services, and document processing.
Cloud Pak® for Network Automation
Automate networks to deliver zero-touch operations.
Cloud Pak® for Security
Integrates security events to create predictive incident insights and assist with response and mitigation.
Cloud Pak® for Integration
Offers cloud-native messaging middleware, message brokering, connectivity and protocol brokering, event streaming, data integration, API management, and a security gateway.
Cloud Pak® for Watson AIOps
Offers observability with Instana, automation, and insight with Watson AIOps, optimization with Turbonomic.
Regulated industries are subject to specific regulations about how and where they run workloads. For example, governments assert that data, and in some cases processing and operational staffing, must remain inside a geographic boundary. This is driving the emergence of “sovereignty” requirements, which become burdensome for firms as they raise considerations such as:
- Partitioning of a data environment with locality-specific security and access controls
- The availability or absence of a trusted cloud provider within a geographic boundary
- Whether data can be processed remotely from its storage, and whether information can move across geographic boundaries
- In-country infrastructure for both storage and processing
- Local staffing for technology and potentially business operations
Hybrid accelerators such as IBM Cloud Satellite alleviate these concerns by providing an integrated and automated hybrid environment that can be deployed on an as-a-service basis anywhere, in any cloud, or in any hosting environment.
IBM Cloud Satellite is a fundamental hybrid enabler providing managed distributed cloud offering based on the same open-source technologies as the IBM public cloud. Satellite distributes and manages cloud services anywhere needed — on-premises, in other vendor clouds, or at edge sites. IBM Cloud serves as the base for Satellite distributions; it abstracts complexities away from the individual locations while it provides a single, secured view of the public cloud where distributed services are observed and managed. Satellite enables users to manage cloud services and applications across public and private environments, including other vendor clouds. Satellite’s flexibility in integrating resources across environments extends to infrastructure, as-a-service operations, secure connectivity, and application lifecycle management.
A Satellite location comprises hosts — essentially, Red Hat OpenShift clusters on Linux hosts deployed on VMs or bare metal servers. Unlike other distributed cloud vendors, Satellite lets users pick the infrastructure they want to leverage, including:
- Already existing infrastructure in an on-premises data center or vendor colocation
- Already existing infrastructure in a private or public cloud environment
- In IBM Cloud or another cloud vendor
- Infrastructure as a service set up and fully managed by IBM
- An appliance that builds and runs IBM Cloud Satellite automatically
For example, let’s say a team has a Satellite location in an on-premises data center in Phoenix where applications are running in containers on a fully managed Red Hat OpenShift cluster. These application workloads are entirely self-contained and can be restricted to serve only local requests.
The ops team that operates these clusters uses a monitoring and control dashboard hosted on the nearest IBM Cloud data center – in this case, Dallas. From a dashboard, the engineer can spin up new OpenShift clusters in the Phoenix satellite location, deploy apps into the existing clusters, integrate services like logging and monitoring, and run the playbook of day two operations tasks. In short, Satellite extends services available on IBM Cloud into a company’s private data center. The apps in the Phoenix data center are self-contained, so they run without needing to communicate back to the IBM Cloud data center in Dallas. At the same time, the ops team consistently views and manages core cloud application services in IBM Cloud.
What if you need to establish another location? Like many IT teams in large organizations, let’s assume your team uses multiple cloud providers. A pool of infrastructure set up in a public cloud provider’s data center in Chicago is added as a Satellite location using the IBM Cloud portal. Once configured, the portal is used to create and control resources in the Satellite location. A Satellite location is similar to a self-contained IBM Cloud region running IBM Cloud services.
As a result, Satellite consistently delivers the same user experience everywhere. Satellite reduces the complexity of running workloads on multiple clouds and integrates them. Since IBM Site Reliability Engineering (SRE) takes care of the lifecycle for services, including updates and patches, developers get relief from tedious and repetitive tasks. They remain focused on more quickly achieving primary business objectives.
In Conclusion
Hybrid cloud platforms are here to stay and help businesses accelerate time to value and customer satisfaction. They are grounded in well-formed architectural principles that are intentionally open and resistant to lock-in. Innovation and standards will inevitably continue to evolve. However, clients shouldn’t wait to get started. IBM brings four accelerating capabilities to support our clients’ hybrid journeys:
- Our most comprehensive open hybrid capabilities, including the Red Hat Open Cloud Platform with OpenShift, IBM Cloud Paks, and server platforms like IBM zSystems and POWER. These run on every fit-for-purpose platform, including client premises, hosting centers, edge environments, and any cloud provider.
- A breakthrough innovation method with the IBM Client Engineering and IBM Elite teams, offering an agile approach rooted in design thinking and agile software development, leveraging IBM technology capabilities.
- IBM Consulting, with thousands of hybrid cloud modernization engagements leveraging hybrid technologies and AI.
- IBM’s expanding ecosystem of software vendors, application development, infrastructure, and consulting partners.
Please speak to your IBM Representative to get started or visit https://www.ibm.com/cloud/hybrid.
About the Author
John J Duigenan is the Global Industry Leader, Financial Services, IBM Technology. Additionally, he is an IBM Distinguished Engineer. He can be contacted at either John.Duigenan@us.ibm.com or https://www.linkedin.com/in/duigenan
TRANSITIONING AN ITS DEVELOPED FOR SCHOOLHOUSE USE TO THE FLEET: TAO ITS, A CASE STUDY
Dick Stottler
Stottler Henke Associates, Inc.
San Mateo, CA.
Nancy Harmon
MARCORSYSCOM, PMTRASYS
HarmonNJ@navair.navy.mil, 407-380-4003
Phil Michalak
Stottler Henke Associates, Inc.
San Mateo, CA.
Abstract
This paper describes our experiences in transitioning the Tactical Action Officer Intelligent Tutoring System (TAO ITS), designed and developed specifically for use by students at the Navy’s Surface Warfare Officers School (SWOS), to fleet use. PMS-430 recognized that, while the Battle Force Tactical Trainer system fulfilled the needs of integrated team training, it required a major portion of the shipboard Combat Information Center (CIC) to be manned in order for the TAO to practice tactical decision making. Experts and instructors agree that the most important factor for maintaining a TAO’s tactical decision-making skill is the opportunity to practice making decisions and receive timely feedback. SWOS has found that the TAO ITS increased the amount of such practice by ten times. Both PMS-430 and SWOS have deemed it beneficial to transition the TAO ITS to the fleet for shipboard use. The TAO ITS and the benefits realized by students at SWOS are described in [Stottler and Vinkavich 2000].
Transitioning the TAO ITS to shipboard use would realize several benefits. Since TAO ITS is PC based and requires no extra human players or support personnel, it gives TAOs and prospective TAOs far greater opportunities to practice their tactical decision-making skills anytime, anywhere. One of the primary limitations to free-play simulated scenario training out in the field or onboard ship is the need for evaluation of the student’s actions. Tactical decision-making practice is almost worthless without knowing whether the decisions were good or bad. The TAO ITS provides automatic debriefing capabilities, giving the student important feedback about which decisions were made correctly and which were omitted or made badly.
There were several considerations in planning the transition of the TAO ITS to fleet use due to the differences between SWOS and the ship’s environment and mission. Individual ships would want to train the TAOs with data specific to their ship and with scenarios appropriate for their geographical area. The TAO ITS already possessed this ability, but the existing interface was built to be used by only a handful of SWOS instructors. These capabilities had to be made far more user-friendly. In the schoolhouse, both instructors and documentation were available for students if they needed additional information. The shipboard version of TAO ITS would have to include this information.
The TAO ITS was alpha-released to the fleet in January 2001 and beta-released in April 2001. Recommended enhancements were made, and it will be released for general fleet use in August 2001. The results and lessons learned from this process are described in this paper.
Bibliographic Sketches
Dick Stottler co-founded Stottler Henke Associates, Inc. (SHAI), an artificial intelligence consulting firm in San Mateo, California in 1988 and has been the president and CEO of the company since then. He has been principal investigator on a number of intelligent tutoring system projects conducted by SHAI including the Tactical Action Officer Intelligent Tutoring System. Currently, he is working on the transition of the TAO ITS for fleet use and on an intelligent tutoring system to teach armored company commander decision-making for STRICOM. He has a Master's Degree in computer science with a concentration in artificial intelligence from Stanford University.
Nancy Harmon is presently a Program Officer for MARCORSYSCOM, PMTRASYS focusing on Combined Arms Staff Training and Deployable Virtual Training Environments. Prior to this assignment, she was a Program Manager with the Naval Air Warfare Center, Aviation and Surface Directorates, focusing on training systems development for Navy and Marine Corps Aviation, and shipboard/shorebased training technology migration for the Surface Navy Combat Systems. She has been employed by the US Government for 27 years, 11 years in Program Management and 16 years in the contracts department.
Phillip Michalak has worked on a number of Intelligent Tutoring System (ITS) projects as an employee of Stottler Henke Associates, Inc. (SHAI). His current ITS duties include the lead software engineer role for an adult literacy ITS, and all of the software engineering and maintenance responsibilities for the Tactical Action Officer Intelligent Tutoring System (TAO ITS). He holds a Bachelor's Degree in computer science from Carnegie Mellon University.
TRANSITIONING AN ITS DEVELOPED FOR SCHOOLHOUSE USE TO THE FLEET: TAO ITS, A CASE STUDY
(Stottler, Harmon, Michalak)
PROBLEM DESCRIPTION
Expert Tactical Action Officers (TAOs) are a high value commodity because they make high-value decisions including use of the ship's weapons systems. Expert TAOs and instructors believe that the most important parameter for gauging the expertise of a TAO is the amount of tactical decision-making practice that he has had. One expert instructor stated, “The difference between a good TAO and a great TAO is tactical experience.” To greatly increase this tactical experience requires significantly more time in tactical warfare situations. Training in tactical scenarios has typically required expensive hardware and a large number of support personnel to play various roles in the simulation and to evaluate the TAO's performance. To reduce the cost and increase the accessibility of tactical training for TAOs, SWOS required a new training system that would run on a low-cost (PC) platform and eliminate the support personnel. It had to be highly portable, personal, and standalone. It needed to be highly configurable and maintainable by the tactical experts themselves. The goal was to greatly increase the amount of tactical decision-making SWOS TAO students would experience.
BACKGROUND
General Intelligent Tutoring System (ITS) Description
The most effective way for students to learn is to work individually, face-to-face, with a qualified tutor, well equipped with instructional material, lab equipment, and so on. As shown in Figure 1, studies have found that individually tutored students perform two standard deviations better than students receiving only classroom instruction. When learning a vocational skill, a student would also practice with real equipment used on the job, and interact with other members of the future work team or with a realistic physical simulation of the work situation if safety forbade actual practice. The teacher could then tailor the teaching approach to the student's speed of learning and performance; the proper technique could immediately be demonstrated when the student made errors, and gaps in the student's prerequisite knowledge could quickly be detected and remedied. Unfortunately, expert instructors are a scarce resource, and buildings and equipment are expensive, making this preferred form of instruction costly and rare.
Figure 1. Tutored Student Achievement versus Classroom Student Achievement.
The goal of an intelligent tutoring system (ITS) is to provide a learning experience for each student that approaches the standard of learning received with one-on-one tutoring from an expert teacher equipped with all necessary training aids. ITSs use artificial intelligence techniques to adaptively make both "how to teach" and "what to teach" decisions appropriate to each individual student during a course.
To achieve its goal, ITS software monitors each student's interactions, and builds a "student model" for each individual. This model comprises the student's performance on training and remediation exercises; knowledge of all the information and remediation received; the knowledge mastered, failed, unknown, and misunderstood by the student; and the student's learning style. As an expert teacher who works one-on-one with a particular student would, the software develops an effective teaching style customized to each student. In other words, an ITS emulates an expert teacher teaching one-on-one in a particular subject.
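The student model described above can be pictured as a small data structure. The following Python sketch is purely illustrative; the class, field, and principle names are our own invention, not taken from the TAO ITS implementation:

```python
# Illustrative sketch of a per-student model like the one described
# above. All names here are hypothetical, not from TAO ITS.
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    name: str
    # principle -> "mastered", "failed", "unknown", or "misunderstood"
    principle_status: dict = field(default_factory=dict)
    exercises_completed: list = field(default_factory=list)  # exercise history
    remediation_seen: set = field(default_factory=set)       # material reviewed

    def record_result(self, principle: str, passed: bool) -> None:
        self.principle_status[principle] = "mastered" if passed else "failed"

    def untested(self, all_principles: set) -> set:
        """Principles the student has not yet been evaluated on."""
        return all_principles - set(self.principle_status)

model = StudentModel("student-1")
model.record_result("missile-defense", passed=False)
print(model.untested({"missile-defense", "contact-reporting"}))
```

A real ITS would also track learning style and remediation history against this model, but even this minimal form supports the "what to teach next" decisions discussed below.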
While the evidence is still limited, it is very positive: ITSs have the potential to create a revolution in effective, low-cost education and training. To date, computers have played a marginal role as compared to the central role of teachers. ITSs will ensure that computers support trainers and help students to learn much more efficiently.
ITSs are particularly effective when the preferred mode of instruction is "learning by doing." The best ITSs embrace this philosophy, and most use simulations on PCs for exercises, avoiding the expense of physical equipment in many cases. Most traditional computer-based training (CBT) systems are simply electronic page turners enhanced with hyperlinks, multimedia, and multiple-choice questions. Even the best CBT that provides "learning by doing" cannot provide the individual feedback students require without a high teacher/student ratio. Intelligent tutoring systems can enable adult students to quickly gain the expertise that otherwise might require years of on-the-job experience. Extensive studies supported by the U.S. government have shown clear superiority of intelligent tutoring systems over ordinary computer-based training.
Description of the TAO ITS
The following summary description is excerpted from [Stottler and Vinkavich 2000]. (See that paper for more details.) The TAO ITS is a simulation-based intelligent tutoring system designed to run on a PC. It has been used at the Navy's Surface Warfare Officer's School (SWOS) in Rhode Island since early 1999. As well as being a powerful assistant to the classroom instructor, the TAO ITS's advanced capabilities as an "electronic teacher" enable a student to use simulations for learning on his own, anytime, anyplace.
The TAO ITS was designed to provide tactical action officer students at SWOS with realistic, practice-based instruction and individualized feedback. A tactical action officer controls his ship's sensors and weapons and directs the movements of the ship and other support vessels and aircraft. The TAO also monitors the movements and actions of friendly and enemy ships, planes, missiles, and submarines in the region. The TAO integrates this information in real time to form a dynamic tactical picture, selects appropriate responses, and issues orders.
The TAO ITS allows students to act as TAOs in simulated scenarios and receive individual feedback on their performance in tactical decision-making, use of ship's sensors and weapon systems, and reporting procedures. Unlike conventional training simulators, after a student completes each scenario, the TAO ITS also automatically evaluates the student's actions to determine tactical principles that the student has correctly applied or failed to apply. These detailed assessments of student performance are available to both the student and his instructor.
This evaluation is carried out using sophisticated pattern-matching algorithms defined by tactical experts via a graphical user interface, without programming. The student can then learn how to correct his problems by either selecting multimedia training material associated with any principle, or by replaying relevant parts of the last scenario he worked to review his mistakes.
The TAO ITS also helps the student choose the next scenario to practice with. The student can allow the software to choose a scenario that contains untested principles, or other scenarios that test principles recently failed by the student, or simply pick his or her own preferred scenario. The instructor can use a scenario generator included in the software package to create any number of additional scenarios, defining complex behaviors for each friendly and enemy ship and aircraft to create realistic, multi-agent tactical simulations.
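A minimal sketch of that selection policy, assuming each scenario is tagged with the principles it exercises. The scenario names, principle names, and scoring rule below are hypothetical, chosen only to illustrate the idea of preferring recently failed and untested principles:

```python
def choose_next_scenario(scenarios, status):
    """Prefer the scenario covering the most recently-failed principles;
    break ties by how many untested principles it would exercise.
    `scenarios` maps scenario name -> list of principles it tests;
    `status` maps principle -> "passed"/"failed" (absent = untested)."""
    def score(name):
        principles = scenarios[name]
        failed = sum(1 for p in principles if status.get(p) == "failed")
        untested = sum(1 for p in principles if p not in status)
        return (failed, untested)
    return max(scenarios, key=score)

scenarios = {
    "strait-transit": ["contact-reporting", "weapons-release"],
    "air-defense": ["missile-defense", "weapons-release"],
}
status = {"missile-defense": "failed", "contact-reporting": "passed"}
print(choose_next_scenario(scenarios, status))  # air-defense
```

As the paper notes, the student can always override any automatic choice and simply pick a preferred scenario.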
The software has three parts: the scenario generator, with which instructors, with limited assistance from a programmer, can create any number of simulated scenarios; the student interface, which presents selected scenarios to the student to practice different tactical concepts; and the instructor interface tool, with which the instructor can review all the students' work with the tutoring system and assess their progress in detail.
Scenario Generator
The scenarios created with the scenario generator can be set in any part of the world, and populated with different surface, air, and underwater platforms (i.e., ships, planes, helicopters, missiles, and submarines). Each individual platform is implemented as an "intelligent agent" and can be given its own performance characteristics and behaviors. For example, a hostile plane will have its own mission, such as flying various patterns to search out enemy vessels and, when one is found, attacking it.
Since the simulator is free-play, there is no guarantee that any particular concept will actually be tested when a student runs a scenario. For example, if the student orders his ship to head away from an enemy plane and remains concealed from it, an entirely different set of events may play out than those that would if the ship were discovered by the plane. To deal with this aspect of free-play simulators, the TAO ITS has "evaluators" associated with the concepts. An evaluator is designed to look for prescribed sequences of events and actions during a scenario. For example, if a missile is fired at the student's ship, there may be a range of appropriate actions he should take in response. A number of evaluators are set up to examine the chosen actions, and depending on what they observe, the software may recognize whether different principles are observed or not. There is not a one-to-one correspondence between evaluators and principles. That is, a combination of evaluated sequences may need to occur to trigger recognition of observance of one principle, or one evaluated sequence may indicate that several principles were breached.
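As a rough illustration of such an evaluator, consider a simple recognizer that fires when a prescribed sequence of simulation events occurs in order. This is a sketch of the idea only; the real evaluators are authored graphically by tactical experts and are considerably richer, and the event names below are invented:

```python
class SequenceEvaluator:
    """Fires when a prescribed sequence of events/actions is observed
    in order during a scenario (other events may occur in between)."""
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence
        self.pos = 0          # position in the expected sequence
        self.triggered = False

    def observe(self, event):
        if self.triggered:
            return
        if event == self.sequence[self.pos]:
            self.pos += 1
            if self.pos == len(self.sequence):
                self.triggered = True

# Hypothetical example: a reasonable response to an inbound missile.
ev = SequenceEvaluator("missile-response",
                       ["missile-detected", "evasive-turn", "engage-ciws"])
for event in ["missile-detected", "report-sent", "evasive-turn", "engage-ciws"]:
    ev.observe(event)
print(ev.triggered)  # True
```

Because there is no one-to-one correspondence between evaluators and principles, a separate mapping layer would combine the triggered/untriggered states of several evaluators to decide which principles were observed or breached.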
Student Interface

The heart of the intelligent tutoring system is the student interface that presents selected scenarios to the student so he can practice different tactical concepts. The software was designed to adaptively select scenarios for an individual student who needs to practice concepts (principles) he has not yet practiced or ones he has recently failed. It also enables a student or instructor to pick any scenario from all of the ones available. As well as the intrinsic feedback that free-play simulations naturally provide, the TAO ITS provides detailed, useful, extrinsic feedback to the student once a scenario is finished. This feedback reviews all the concepts attempted and whether they were passed or failed. At this point, the student can review multimedia material about any concept or see a replay of the scenario to review errors.

While there is a basic physical simulation driving all the platforms in a scenario, the simulator in the TAO ITS is inherently conceptual. For example, a tactical action officer on an AEGIS cruiser works in the Combat Information Center (CIC), and is supported by a large number of individuals who provide him with information and respond to the TAO's orders. To simulate all of these commands, the TAO ITS provides the means to issue them directly. Thus, as shown in Figure 2, the left uppermost section of the screen operates all the ship's weapon systems, the lower left section issues commands to any supporting aircraft, the upper right section provides control over the ship's navigation, and the lower right section operates the radar and sonar equipment and displays numerical responses to this equipment. The lower middle panel displays communications from crewmembers, for example, reports of incoming missiles, and these communications can also be heard with the voice synthesizer. The central display panel is a reasonable facsimile of the large screen display in the CIC, and it displays only the information that would be realistically available to the TAO at any time.
Once a scenario is terminated, the software evaluates the student's actions by comparing them, and the circumstances under which they were taken, against the "evaluations," and prepares an Evaluation Summary (see Figure 3).
The Evaluation Summary lists the situations in which the student demonstrates understanding of concepts (principles) by their correct application. Correct decisions are marked by a green circle. Situations where understanding of concepts was not observed (either by incorrect or omitted actions) are marked by red circles. Also provided are the time and description of the action. The exact principles observed or not observed for any of these situations can be found by clicking on the particular situation. By clicking on any noted principle the student will be taken to multimedia information that explains the principle. The Evaluation Summary form also allows the student to replay the recent scenario from the start or from a selected time.
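As a toy text rendering of such a summary (the real interface uses colored circles and hyperlinks to remediation material; the format, times, and descriptions below are invented for illustration):

```python
def format_summary(results):
    """Render (time, description, passed) tuples as a debrief list.
    GREEN stands in for the green circle (principle correctly applied),
    RED for the red circle (incorrect or omitted action)."""
    return "\n".join(
        f"[{'GREEN' if passed else 'RED'}] {time}  {description}"
        for time, description, passed in results
    )

print(format_summary([
    ("10:42", "Engaged inbound missile within doctrine", True),
    ("10:45", "Required report omitted", False),
]))
```

In the actual tool, clicking a situation expands the principles behind it, and clicking a principle opens the associated multimedia material.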
Instructor Interface Tool (IIT)
The instructor can manage the students as groups using this tool (see Figure 4). It also provides the instructor with tools to manage the hierarchy of instructional principles and the set of multimedia review content, to link specific multimedia review content to principles, and to associate principles and evaluations with specific scenarios.
TAO ITS operates on Windows PCs and can be configured to run in several different ways. This is accomplished by a single installation CD, with the installation script asking which of three installations is desired – Server, Student, or Instructor (which includes the Student version but adds the scenario generator and IIT). The most useful installation for operational use is multiple users on a network. This requires the installation of the Server version on the network file server, usually a single Instructor version on the instructor's machine, and several Student versions on student machines. Since the Student version is installed most often (and potentially by the least-informed users), it is the default.
When TAO ITS was originally completed and delivered to the SWOS in 1999 it was not seen as a system to be completely maintained by the developers. Rather it was conceived as a shell that SWOS could use to enter principles, their descriptions, scenarios, and evaluation machines to create a complete training system. Since the large majority of the training system maintenance related to keeping these items up-to-date, SWOS had the capability to maintain and update TAO ITS themselves. The content that was originally delivered with TAO ITS was completely conceived by SWOS. Some of it they had entered, but most of it was entered by the software developers, owing to a lack of available time by the SWOS personnel. Independent studies of students at SWOS have found almost all students had highly favorable reactions. Instructors estimate that students get ten times more tactical decision-making practice now than before TAO ITS was used.
WHY TRANSITION TAO ITS TO THE FLEET
The Problem Description Section describes the general need for TAOs to get more tactical decision-making practice. While the TAO ITS was remarkably successful at achieving this goal at SWOS, shipboard personnel were experiencing the same lack of tactical decision-making practice that the TAO ITS was designed to solve. They had no personal, standalone tactical training capability onboard. The shipboard training systems were mostly designed to provide team training using the actual existing combat system equipment. Thus the only way for the TAO to practice was with the majority of the CIC staffed along with some support personnel, an expensive exercise which provided only limited opportunities. Meanwhile, since the TAO ITS was allowing ten times more tactical-decision-making practice at SWOS and was highly portable (it could run on a laptop), it was natural to try to transition it from a schoolhouse-only environment to additional use in the fleet.
During preliminary visits to various ships and fleet training organizations in 1999, several different fleet uses of TAO ITS were suggested by fleet personnel. Of course the most obvious was standalone TAO training. Other suggested uses were mission rehearsal, battle order development, junior officer training, battle group development and coordination (with a networked version), and head-to-head training (also with a networked version). Some of the more enthusiastic fleet personnel stated that they were ready to use TAO ITS immediately. Based on these preliminary investigations, we began planning for the transition of the TAO ITS to the fleet.
CONSIDERATIONS IN PLANNING THE TRANSITION
Since the TAO ITS would not run on the same computer hardware as the existing on-board tactical or training systems, we did not encounter any technical difficulties in transitioning the software to shipboard use. In fact, our early visits to ships showed that the PCs onboard the ship, and the LANs used to interconnect them, were exactly like those at SWOS and those found in typical office environments.
We anticipated that the largest difference in the use by the fleet would be a greatly increased number of users and a stronger desire to customize TAO ITS to specific ships and their own types of missions. We expected that they would want to define their own ship's parameters in more detail and with classified information. They would want to author new scenarios and the accompanying intelligent behaviors to control simulated platforms in those scenarios, as well as new ship-specific principles and evaluations. With the increased number of users from the fleet would come greatly decreased personal support from SHAI as compared to SWOS. At SWOS, while there were dozens of student users at a time, there were only a handful of instructor users. The use of TAO ITS as a student is very straightforward as compared to that of an instructor, who needs to author scenarios, specify platform behaviors in those scenarios, and create evaluation machines. The few instructor users at SWOS could be easily supported by the TAO ITS software development team. Furthermore, they had the two-year development period to slowly become familiar with the software. In transitioning to the fleet, dozens or hundreds of fleet users would be introduced to the tool simultaneously and would have little time to become familiar with it. Also, contact with shipboard personnel is inherently more difficult.
Our primary strategy for dealing with this decreased personal support was to substantially improve the hard-copy and online documentation. Most of the user interface was already about as intuitive as it could get, given the basic concepts on which TAO ITS is built. As described later, this documentation effort had little effect.
One concern was the complexity of the installation procedure. This was greatly simplified, with defaults supplied as appropriate and additional explanatory documentation written. Furthermore, as described later, a special one-page, very simple installation and getting-started document was included as hard copy with every 4.0 Alpha and Beta CD. We wanted to make sure the evaluators had no problems with the installation.
We also needed to devise a way that TAO ITS could be delivered as both classified and unclassified versions. This was accomplished by developing two different installation CDs. The primary installation was the standard TAO ITS 4.0 Beta CD. But since all types of information in TAO ITS which might eventually need to be classified are represented in data files (including ship and weapon parameters, scenarios, evaluation machines, principles, behaviors, and descriptive information), the standard non-classified data can be easily overwritten (or edited by users). Thus the classified CD simply overwrites some of the unclassified files with classified ones. Furthermore, fleet users are free to install TAO ITS on a classified machine and then edit the unclassified data with classified data (such as more accurate parameters for their own ship).
A related problem which could occur was that, for the first time, a large number of users would be editing scenarios, behaviors, principles, and especially platform descriptions, in parallel. We needed to devise a way that, when future updates of TAO ITS were delivered with updated data files, these updated files would not overwrite the work of fleet personnel. Furthermore, it would be inefficient to have many fleet personnel entering data for a large number of enemy platforms in parallel. The most common platforms desired for scenarios needed to be pre-entered once, before the fleet version was widely released.
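One plausible way to implement such non-destructive updates is to ship a manifest of checksums for the originally delivered data files and skip any file the user has since edited. The sketch below is our own illustration of that idea, not the mechanism actually used in TAO ITS; all names are hypothetical:

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(p: Path) -> str:
    return hashlib.sha256(p.read_bytes()).hexdigest()

def apply_update(update_dir: Path, install_dir: Path, shipped: dict) -> list:
    """Copy updated data files into the installation, but preserve any
    file whose current hash differs from the originally shipped one
    (i.e., the user has edited it). `shipped` maps relative path ->
    hash of the file as originally delivered. Returns preserved files."""
    preserved = []
    for src in update_dir.rglob("*"):
        if src.is_dir():
            continue
        rel = src.relative_to(update_dir)
        dst = install_dir / rel
        if dst.exists() and file_hash(dst) != shipped.get(str(rel), ""):
            preserved.append(str(rel))  # locally modified: leave it alone
            continue
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
    return preserved
```

A scheme like this lets an update CD refresh unmodified SWOS-owned content while leaving ship-specific edits intact.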
The version of TAO ITS developed for SWOS did not include an undersea warfare component. It was felt that the fleet version must include this component at the level of detail appropriate for the TAO. Before development began, it was decided that the concept of operations for the fleet TAO ITS should be determined based on visits to actual ships, which would also be helpful in determining what enhancements were required. Time was allocated to answer questions from fleet evaluators and to conduct onsite observations, though little of the former was ever required. Effort was allocated to support SWOS's entry of content. The majority of the effort was allocated to implementing the anticipated enhancements, including better documentation, clearer installation procedures, more ship type definitions, and minor user interface upgrades. Unfortunately, there was not enough budget to make some of the more far-reaching changes such as a networked version for head-to-head and battle group coordination training.
As described above, TAO ITS was intended to be a shell which contained knowledge maintained by naval personnel. But as we developed the fleet version, the purely administrative question arose, "Which naval personnel?" Until then, SWOS had owned and maintained all the TAO ITS content; but one of the advantages for the fleet was the ability to customize the scenarios, behaviors, evaluations, and principles for individual ships. It was eventually decided that SWOS would own the primary tactics, which consisted of the principles, their evaluations, and a basic set of scenarios and behaviors. The individual ships would own the ship modeling parameters, new ship-specific scenarios, and new principles and new evaluations for those principles, without changing the SWOS principles and evaluations.
Although SWOS was extremely satisfied with the TAO ITS, this was partly because they were so involved in defining its functionality. SWOS, NAWC-TSD, and the software development team all wanted TAO ITS to be successful in the fleet and all worried about fleet acceptance. NAWC-TSD pointed out very early on the ramifications of a much larger base of users. Therefore, a cautious approach was taken that began with an Alpha release of the fleet version to NAWC-TSD, SWOS, and a few fleet users. This Alpha release included what were thought to be the minimal enhancements needed for the fleet version. Feedback from this release guided the list of enhancements to be implemented. These were implemented in a Beta version which was released to a wider, more diverse audience of fleet users who represented an unbiased cross section of the fleet as a whole. Once this feedback was received, the version for the entire fleet could be released with more confidence of its acceptance.
FINDINGS
The plan described above was followed. The first difficulty encountered was getting the quality of the feedback that we expected from the Alpha version. The primary difficulty was in getting evaluators to spend the time necessary. Few went beyond the simplest use of TAO ITS – playing the role of a student in simulated scenarios and getting debriefed. The comments were entirely positive but didn’t address what we knew to be the potential problems – scenario creation and behavior editing. We did not allocate the time to make sure the evaluators were performing the necessary functions, because we wanted the evaluation to be as realistic as possible: When the TAO ITS would eventually be released, we would not be able to call all fleet users to get them using it. The end result was that the Beta version was created based more on our previous beliefs as to what was required than on the previously anticipated feedback. This primarily related to some redesign of the scenario generator user interface to make it more intuitive, allowing remediation files to be web links, and facilities to allow both classified and unclassified versions. We also were determined to be more active with the Beta version reviewers. Their experience is described below.
Feedback on the fleet usefulness of the Beta version of TAO ITS 4.0 was received from about a dozen fleet personnel ranging in rank from ensign to captain, although lieutenants and lieutenant commanders were the most represented. About half were observed in person. In general, the comments were primarily positive. Almost everyone's summary comment was something like, "This is a good training tool and it will be useful onboard." Beyond that summary, there were a large number of specific findings.

The time of fleet personnel onboard ship is severely limited. This has several consequences. Personnel are conditioned to rarely read software documentation. For example, as described above, TAO ITS can be configured to run in several different ways. For evaluation purposes, the best installation is to install the Server and Instructor versions on the same machine, the one to be used to perform the evaluation. TAO ITS 4.0 Beta was accompanied by a single piece of paper which had 5 bolded headings — Overview, TAO ITS Installation, Requirements, Server Installation, and Instructor Installation. Overview consisted of two lines, the first being: "This document is designed to guide you through setup of the TAO ITS for evaluation in a standalone environment." The first two sentences of TAO ITS Installation were: "This section gives detailed instructions for installation of the TAO ITS in a standalone environment (everything is installed on one machine, and there is only one user). This involves performing a Server Installation followed by an Instructor Installation (the directions for both are below)." The server and instructor installations then each listed 6 steps, all but one of which was simply hitting the "Next" button. The other step was choosing either "Server" or "Instructor." This is also described in the on-line documentation. Yet, the majority of evaluators failed to install the Server version, on which the other versions depend.
The only solution to this problem would seem to be to send a separate CD for evaluators which, with one mouse click, installs all needed versions on a single machine. This will either force us to send a separate CD with a different installation script for operational users, or no operational version will be installable by merely accepting all of the defaults. The needs of operational users are in conflict with the needs of evaluators (even though these may be the same people, just at different times).
As described in the Transition Planning Section, the improvements in user-friendliness that we made were accomplished primarily through the on-line help and on-line Microsoft Word user documentation. These were very rarely consulted. We are devising a method for bringing relevant information to the first-time user's attention in a more proactive way, when it is relevant.
TAO ITS is highly configurable. New scenarios can be (and are intended to be) created by tactical personnel. Furthermore, the behavior of the entities in the scenario, including enemy platforms, friendly platforms, warfare commanders, and team members, can all be defined graphically and without programming. However, the fleet personnel were not used to having this kind of capability and the flexibility it provides. Evaluators would only attempt to edit these items when personally prompted to do so. At times there was a problem with fleet personnel understanding the degree to which TAO ITS was configurable. For example, a comment from one evaluator was that in a particular scenario, to make it more realistic, there needed to be more involvement from the warfare commanders. He couldn't be made to understand that, as an instructor, he could change that behavior himself. Certain types of users could grasp these configuration capabilities more quickly than others, as described below.
The fleet users could be broken down into two different groups based on their skills. The younger group, typically ensigns and lieutenants, could quickly grasp the concepts of creating scenarios and behaviors, and in fact enjoyed doing it. However, this same group tended to either be less disciplined in their use of correct protocol or became overwhelmed by the tactical requirements of playing the scenarios. They didn't have the good cognitive tactical habits of the other group. The other group tended to be older, and included more senior lieutenants, lieutenant commanders, and above. They could much more easily handle the tactical requirements of playing scenarios, but had more difficulties creating scenarios and especially creating the intelligent behaviors of platforms. Since everyone could intuitively use the TAO ITS's simulation without looking at the documentation, this group's lower computer skills did not hamper their ability to perform in the simulation. This was an important design goal of the simulation because the ITS assumes mistakes are based on a lack of tactical knowledge, not a failure in knowing how to use the simulation. The existence of these two groups suggests that fleet use at the instructor level would be facilitated by pairing an older tactical expert with a younger, computer-savvy assistant, at least for creating scenarios, creating behaviors, and creating new evaluation machines.
As described above, TAO ITS was conceived as a shell with accompanying content to be maintained by SWOS. We considered SWOS our TAO ITS development partners. However, difficulties arose with this arrangement as we developed the fleet versions. SWOS is not equipped or funded for this role. It was difficult to share the responsibility (programmers responsible for software development and SWOS responsible for content) for preparing the fleet version of TAO ITS. Anyone at SWOS working on TAO ITS content had other duties as their primary responsibilities. Most damaging was the inevitable Naval staff turnover. As the main SWOS TAO ITS content authors left, they were replaced with newcomers. As the third generation was arriving, the TAO ITS software implementation was substantially complete. This meant a greatly reduced role for the software developers and a greatly reduced presence of those developers at SWOS. This made it difficult to interact with the newcomers to the degree really necessary. A similar problem, but even more difficult to deal with, is now manifesting itself in the fleet. Ship personnel experience a high degree of turnover too. For example, the 1999 XO of a particular AEGIS Destroyer was involved early in the TAO ITS project and was an enthusiastic supporter of its use onboard. He had it installed and evaluated it and determined that it would be useful. One of his assistants was trained in its use. But both have since left the ship. Further discussion below of the findings is broken down by the different software modules they relate to.
Simulator and Debriefing Findings
More fleet evaluation use was performed on TAO ITS's simulator than any other component. This is the component in which the student spends the large majority of his time. The findings were similar to those experienced at SWOS: almost universally positive, with comments like "good training tool" and "good feedback". Most evaluators tended to group their comments on the simulation and the debriefing together, since the two are so closely tied together, and thus were very complimentary of both modules. In particular, the simulation appears to be very intuitive for most fleet users, with no need to read the documentation. One set of negative comments related to the fact that TAO ITS does not check for every possible negative act. This is actually a limitation in the current content, not in the current software. But it is the case that an evaluation machine (or at least one transition in an evaluation machine) must be created to explicitly check for each significant incorrect or omitted action. Typically, the instructors do not create evaluations for grossly incorrect actions if they feel a serious student would never make such a mistake. Again, this particular evaluator didn't realize that he, himself, could have added an evaluation machine to check for the particular incorrect actions that he was concerned with. Computer-savvy fleet evaluators playing the role of students particularly liked the debriefing and in-scenario prompting provided by the automated warfare commanders relating to reporting and querying. One improvement they would like to see is for the content of a proper SITREP to be shown when it is sent. Currently, the SITREP is sent when a button is pushed, and the proper content is never shown to the student.
**Instructor Interface Tool (IIT) Findings**
Fleet evaluators created students and groups without any problems. They quickly grasped the difference between scenario files in the Scenario Generator and scenarios in the IIT, although we will need to make the documentation more proactive on this point. They also quickly grasped the meaning of the student performance displays. Scenario files can be easily copied; IIT scenarios would benefit from the same capability.
**Scenario Generator Findings**
Several fleet evaluators were given the task of creating a scenario with no guidance, in order to see which items caused them problems. The user experience was very positive. There were many positive exclamations from the lieutenants as they explored the Scenario Generator (SG) interface. They had fun exploring the interface and creating scenarios. It was especially good that it was simple to change the Ownship platform type since they had the misconception that it was an AEGIS-only tool. The younger group performed significantly better than the older evaluators on most tasks, with larger differences noted on tasks that are more like programming such as the creation of Platform Behaviors in the graphical editor, which they caught on to quickly. Younger evaluators immediately placed platforms with the SG to create scenarios, with older evaluators lagging slightly behind. Evaluators effectively used online help when prompted to do so and said that the online descriptions of the behavior primitives were useful. Younger evaluators would use the simulation for quick sanity checks on scenarios, whereas older evaluators tended to do this with difficulty. Evaluators made good use of pre-existing behaviors.
There was some confusion between editing platform instance characteristics and platform type characteristics. The SG should probably warn the user when he is about to edit a platform type, since this is rarely his intent. Uncharacteristically, the SG crashed on two occasions, but in both cases, the "autosave" feature had preserved most or all of the data. Autosave really helped to keep a positive impression of the tool. The crashes may have been the result of several evaluators using one networked copy of the SG. An undo for the SG (beyond file save/restore) would be useful since some of their experimentation led down false alleys and they would have liked to have been able to undo a change. The unusual method of leaving object insertion mode using a right click (whereas a left click continually adds more simulation objects) continued to cause confusion, even with the existing tip at the bottom of the screen.
One major upgrade the evaluators requested was SAG (Surface Action Group) capabilities. Many scenarios that the users were trying to create involved tasking other ships to deal with a threat. In fact, two students created a behavior whereby a friendly ship would follow and attack a platform that had fired on ownship. They wanted the capability to direct the support ships to take this action (during the simulation) rather than creating a behavior for it. They also wanted to use Sectors for assigning responsibilities and group oriented defenses.
During creation of behaviors, multiple users tried to connect states to existing transitions which is not allowed. The documentation was not clear on the fact that existing transitions can’t be moved from one state to another. They also had a misunderstanding about what the term “Ownership” referred to and how to create logical Ors and Ands with the behavior editor.
**Other Miscellaneous Issues**
Different evaluators had different opinions as to the value of TAO ITS being classified versus unclassified for fleet use. Some felt that the convenience of being able to use it on any ship PC outweighed the slight loss in fidelity of the unclassified data. Others thought that the fidelity was more important. Fortunately, TAO ITS can be set up to run either way easily. A few minor miscellaneous enhancements were suggested, but not required. In fact, most evaluators thought it would be a useful tool for fleet users as-is, without any modifications at all.
One important capability which has been suggested several times during the last couple of years is a multiplayer version. This remains on our desired list of enhancements.
**LESSONS LEARNED**
The following are a list of general lessons learned that should be considered when transitioning an ITS from schoolhouse to fleet use. Many would also be applicable to any advanced training system. First, we found the PCs and LANs used onboard ship for general administrative functions (and available for use by standalone PC-based training systems) are the same as would be expected in a typical office environment and present no technical difficulties for software designed to run on generally available PCs and LANs.
Documentation, no matter how convenient, concise, or well-written is rarely read, including even on-line help. Therefore, you can’t document around an awkward interface feature or difficult concepts. Bringing up needed information proactively the first time the user enters the relevant area is a good idea. Furthermore, shipboard personnel have very little time and are very hard to get hold of. It may be difficult to recruit enough of them for evaluation purposes.
Most importantly, simulation-based ITSs appear to be acceptable to (and even welcomed by) fleet users. Fleet users believe them to be beneficial, although further study will be needed to confirm this.
Software designed for schoolhouse use will often also be useful for training on board ship, with relatively minor enhancements required. The TAO ITS simulation is intuitive for fleet users even though its functionality was designed by SWOS instructors without the fleet in mind per se. Similarly, TAO ITS's student management capabilities were intuitive and easy to use, even though it was designed primarily for SWOS. The flexibility to allow SWOS instructors to customize many aspects of the system is a good capability for fleet users too. The Navy got twice the return on that particular investment, which should be considered for other schoolhouse systems.
It is hard to expect a schoolhouse to enter and maintain significant content, especially without a budget to do so. This is true even if they have the software tools (and enthusiasm) to do so. The time of Navy tactical experts must be explicitly allocated to create the necessary base scenarios. It is important to resolve early what information should be standardized and what information ships can tailor. The keeper of the standardized knowledge and information must be explicitly decided.
When working with both the fleet and schoolhouse, be prepared for a lot of staff turnover. Navy personnel cycle through many positions in about two years, perhaps even less for fleet positions. A project is likely to catch most in the middle of their rotation. The number one defense is to make the software as user-friendly and intuitive as possible.
In general, the user-friendliness versus capability trade-off (given a limited budget) works itself out differently between a schoolhouse and the fleet. Both want very user friendly interfaces for the students (i.e. the simulation). The schoolhouse would rather sacrifice user-friendliness to have more capabilities, given that they have a small number of users, compared to what the fleet users would decide given the same budget.
One solution is to drop or hide some of the capabilities in the fleet version. However, this tends to lead to separate versions for the fleet and the schoolhouse, which is impractical. To obtain the optimal user-friendliness for the fleet with the same capabilities as the schoolhouse, requires a very substantial budget, more than the software developers probably realize.
Since the software developers are involved with the system for a long time and they observe users (at the schoolhouse) who have mostly been involved with the system for a long time, they have a hard time realizing what usability problems really exist and how they should be fixed. It takes experience with a fresh batch of fleet users to shake things up adequately, which is why an early Alpha version release to fleet users is so helpful. If a user interface feature is noticed to be non-intuitive early, it probably always will be until it is redesigned. This seemingly obvious fact can get lost when dealing with a small group of users over a long period of time. For example in TAO ITS, every new user always had to be told about using the right click to turn off the object insertion feature, yet it was never reimplemented. It was never considered important since the number of new users had always been small and spread out over a long period of time.
It may take users a while to understand, appreciate and utilize new capabilities (such as the new capability to configure what used to take programming to change) they are not accustomed to, especially if they are less computer-savvy. Single click installation of an evaluation setup is a good idea for evaluation purposes. Fleet users have different concerns than schoolhouse users. But all fleet users are not the same; they are not one monolithic group, but will be diverse with different strengths and weaknesses in both tactical and software knowledge.
It is a good idea to let the individual ships decide separately between classified and unclassified versions. An added benefit of having so many user-defined model parameters and behaviors is that this information can all be kept in data files, thus keeping the software itself unclassified. It is important to keep the future training system updating process in mind in advance of the first operational fleet release to protect the investment of the users' effort relating to editing scenarios, behaviors, etc. Similarly, if multiple fleet users want to input the same data (such as a description of a particular threat platform), then that data should be pre-entered, to eliminate redundant entry work for users.

Head-to-head and cooperative versions of training simulations are a good idea. Another alternative is a competitive scoring system. Autosave is a good feature to have in a scenario generator or other software requiring significant user input. It can even maintain a good impression of your tool in the face of unexpected (and uncharacteristic) crashes. "Undo" is also a nice feature if you can afford it, especially for new users.

Most fleet users could, with some effort, create scenarios using existing behaviors on their own. More computer-savvy users could create behaviors and edit platform definitions with some effort. Fleet users have difficulty distinguishing between types and instances. (This is a very general problem that we have encountered in several domains and applications.)

Alpha and Beta testing with fleet users is important! This is because a new group of users will always try to do things (and have misunderstandings) that you did not anticipate. This will cause you to discover the need for more enhancements than you expected. We found that about half of the needed enhancements could be at least guessed at in advance, but that the other half were completely unanticipated. Therefore, a budget created before the transition project starts based on anticipated enhancements will always be short of what is really needed.
FUTURE WORK
We plan to receive maintenance and support funding for TAO ITS to continue to improve and maintain the product and aid SWOS in the maintenance of its content. We expect these improvements primarily to relate to making it easier for nonprogrammers to create behaviors and evaluation machines. We have received funding to make enhancements specifically to improve TAO ITS’s adaptability to different students and evaluate the effectiveness of those improvements. We hope to get funding to make a networked version for head to head and battle group coordination training.
Abstract
Many organizations today support physical, virtual, and cloud-based systems across a wide range of operating systems. Providing least privilege access to systems can be a complex mesh of sudoers files, profiles, policies, and firewall rules. While configuration management tools such as Puppet or Chef help ensure consistency, they do not inherently simplify the process for users or administrators. Additionally, today's DevOps teams are pushing changes faster than ever. Keeping pace with new services and applications often forces sysadmins to fall back on more general access rules and thus expose broader access than necessary. Rundeck is a web-based orchestration platform with powerful ACLs and SSH-based connectivity to a wide range of operating systems and devices. Rundeck's simple user interface is complemented by DevOps-friendly REST APIs and YAML or XML configuration files. Using Rundeck for server access improves security while keeping pace with rapidly changing environments.
1. Introduction
Simplicity and least privilege access are two pillars of information security. A complex system introduces more defects and is hard to test (Schneier, 1999). NIST guidelines state "…organizations should base access control policy on the principle of least privilege…” (Swanson & Guttman, 1996). However, simplicity and least privilege can be at odds with each other in an organization supporting a disparate collection of operating systems, network devices, and hybrid cloud environments. Each operating system for these devices has a different access control implementation. Unix and Linux have sudo and SELinux, Windows has Group Policies, and network devices have various options. This system complexity combined with elastic cloud environments and DevOps change velocity results in conflict with security policies and access controls (Lawler, 2015).
Least privilege access controls are often implemented with Role-Based Access Controls (RBAC) but are "often difficult or costly to achieve because it is difficult to tailor access based on various attributes or constraints” (Hu, Ferraiolo & Kuhn, 2006). It is difficult for a single system to keep pace with automated configuration management and dynamic virtual environments. There are still times when a user needs an interactive session (e.g. SSH) to complete a task. However, the majority of commands can be processed centrally, either interactively or scheduled, and outlier requests can be flagged and granted temporary access. Although many tools enable configuration management and remote access, Rundeck stands out as a robust solution for controlling access by combining a user-friendly interface and granular access controls.
Rundeck is an open source orchestration, job scheduler, and runbook automation platform written in Java (rundeck.org, 2016). The same tools and scripts DevOps, SysAdmin, and Security teams use today can be implemented with granular access controls in Rundeck. In turn, these scripts and tools can be safely delegated to different groups in a controlled manner. This delegation of access can be version-controlled and scripted to keep pace with DevOps teams. These scripted executions are logged centrally in the Rundeck database. Operations and development teams maintain their release
John Becker, jbecker42@gmail.com
velocity while security teams see least privilege access, improved logging, and a shared tool for incident response.
2. Rundeck Basics
Some refer to Rundeck as “the Swiss army knife for ops” (rundeck.org, 2016). The application’s principal function is to execute jobs (scripts or commands) on target nodes. The web user interface, command line, API, and scheduler are all job triggers. Jobs can be bundled together for tasks such as automated runbooks, software deployments, and incident response.
2.1. Setup
Rundeck is available as a Java jar-based “launcher” install, Debian or Ubuntu package, or an RPM package. The examples in this document will use Ubuntu 16.04. Begin with installing Java:
```
sudo apt-get install openjdk-7-jdk
```
Then download the latest Rundeck Debian package and verify the SHA hash from http://rundeck.org/downloads.html.
```
wget http://dl.bintray.com/rundeck/rundeck-deb/rundeck-2.6.8-1-GA.deb
shasum rundeck-2.6.8-1-GA.deb
865c669c8694a9b6fa595363c9906cf771818337 rundeck-2.6.8-1-GA.deb
```
Check the SHA hash matches what is posted on rundeck.org, then install the package and start the rundeckd service.
```
sudo dpkg -i rundeck-2.6.8-1-GA.deb
sudo service rundeckd start
```
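The manual shasum comparison above can be scripted so that a checksum mismatch aborts an unattended install before dpkg runs. A minimal sketch, using GNU sha1sum (equivalent to shasum's default SHA-1 output); the function name and the scratch file are hypothetical, and a real run would pass the downloaded .deb and the digest published on rundeck.org/downloads.html:

```bash
# verify_sha1 FILE EXPECTED_DIGEST: return non-zero on mismatch, so a
# calling install script can bail out before installing the package.
verify_sha1() {
  actual=$(sha1sum "$1" | cut -d' ' -f1)
  [ "$actual" = "$2" ] || { echo "checksum mismatch for $1" >&2; return 1; }
}

# Demonstration with a scratch file standing in for the real .deb:
printf 'demo package' > /tmp/pkg.deb
good=$(sha1sum /tmp/pkg.deb | cut -d' ' -f1)
verify_sha1 /tmp/pkg.deb "$good" && echo "checksum ok"
```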
By default, Rundeck runs as a non-privileged user “rundeck” on 0.0.0.0 (all interfaces) on TCP port 4440. Users can log in with the default “admin” username and password at http://<hostname>:4440. The built-in HSQLDB database suffices for small instances, but most production use cases need a dedicated relational database. MySQL is the preferred database, with support for MS-SQL as well (Schueler, 2015). Scaling Rundeck is outside the scope of this paper.
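When moving off HSQLDB, the switch to MySQL is made in /etc/rundeck/rundeck-config.properties. A sketch of the relevant datasource entries; the hostname and credentials below are placeholders, and the exact property names should be checked against the Rundeck administration guide for the installed version:

```
dataSource.driverClassName = com.mysql.jdbc.Driver
dataSource.url = jdbc:mysql://db.example.test/rundeck?autoReconnect=true
dataSource.username = rundeckuser
dataSource.password = setsomethingsecret
```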
2.2. User Authentication
Rundeck supports three types of authentication: PropertyFileLoginModule (/etc/rundeck/realm.properties), LDAP, and PAM. Many organizations prefer LDAP for centralized access. The PAM module works well in settings where users are managed locally with configuration management (e.g. Puppet). PropertyFileLoginModule is the default and sufficient for test instances of Rundeck. The PropertyFileLoginModule supports three types of hashing or obfuscation for passwords: OBF, MD5, and CRYPT. These are built-in to the Jetty project used by Rundeck.
Of the supported types, OBF is the least secure, as it is a reversible obfuscation method (Jetty, 2016). MD5 has been insecure for password hashing for nearly 20 years (Dobbertin, 1996). CRYPT is the UnixCrypt Java class (Jetty Source Code, 2016), which is limited to the 56-bit DES algorithm (Class UnixCrypt, 2016). Numerous superior tools exist for encrypting, auditing, and managing LDAP and PAM. Therefore, LDAP and PAM are the recommended production authentication methods.
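As a concrete illustration of the MD5 storage form, a realm.properties entry can be generated with standard tools: Jetty's MD5 credential is the literal prefix "MD5:" followed by the hex digest of the password. The username, password, and roles below are hypothetical:

```bash
# Build a realm.properties line for a hypothetical "deploy_user" whose
# password is "s3cret-pass"; the digest replaces the cleartext password.
hash=$(printf '%s' 's3cret-pass' | md5sum | cut -d' ' -f1)
echo "deploy_user: MD5:${hash},user,deploy"
```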
2.3. Hardening
A tool like Rundeck has keys to access a broad range of critical systems in an organization. Rundeck servers should be placed in a protected enclave because of the need for confidentiality and integrity (Rome, n.d.). Inbound connections are limited to HTTPS for users and SSH for Rundeck administration. Outbound connections depend on the node plugins used, but typically require SSH on port 22 at a minimum. After securing network access to the Rundeck, care should be taken to configure and monitor the host itself.
2.3.1. File System
Rundeck installs with file permissions for the directory /etc/rundeck set to 655 and owned by root. Files inside /etc/rundeck have ownership "rundeck" and group "rundeck" with 640 file permissions. These items are sensitive, as they comprise Access Control Lists (*.aclpolicy), user accounts (realm.properties), and core configuration files that include passwords (*.properties). From an operating system perspective, this may be acceptable, as the "rundeck" user has no shell access. However, the Rundeck application has a loophole wherein users can execute jobs locally as the "rundeck" user in Linux. This access allows non-privileged Rundeck users to read and modify critical items in /etc/rundeck when executing jobs or ad-hoc commands against the "localhost" node. Restricting "localhost" targets to Rundeck administrators prevents inadvertent access to critical configurations. An additional layer of protection is to change the file ownership inside /etc/rundeck to "root:rundeck". The 640 file permissions will allow the "rundeck" user read access, but limit writes to the root user. Finally, files in /etc/rundeck should be monitored closely for any changes with a host IDS program such as Tripwire or OSSEC.
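The permission change described above can be rehearsed on a scratch directory before touching a live install. The directory and filename below are hypothetical stand-ins for /etc/rundeck, and the scratch files are owned by the current user rather than root:rundeck:

```bash
# Recreate the recommended state on a scratch copy: mode 640 means the
# group ("rundeck" on a real system) may read, but only the owner may
# write. stat confirms the resulting octal mode.
mkdir -p /tmp/rundeck-etc
touch /tmp/rundeck-etc/framework.properties
chmod 640 /tmp/rundeck-etc/framework.properties
stat -c '%a' /tmp/rundeck-etc/framework.properties   # prints 640
```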
2.3.2. Remove Default Access
The first step for a production Rundeck instance setup should be to replace the admin user login per CSC 5.3 (Critical Security Controls, 2015). The default account resides in the last section of /etc/rundeck/realm.properties:

```
# This sets the default user accounts for the Rundeck app
#
admin:admin,user,admin,architect,deploy,build
```

Delete or comment out this section if using LDAP or PAM. If forced to use the PropertyFileLoginModule authentication, change the ‘admin' username and password and store the password in MD5 format per the instructions at http://rundeck.org/docs/administration/authenticating-users.html.

John Becker, jbecker42@gmail.com

Introduction to Rundeck for Secure Script Executions

2.3.3. Enabling Security Transport

Rundeck listens in cleartext at http://<hostname>:4440. Cleartext HTTP transport is a bad practice for system administration. CSC 3.4 states strong encryption (TLS) should be used (Critical Security Controls, 2015). There are two options for providing HTTPS transport security. The first choice is to enable SSL per the instructions at http://rundeck.org/docs/administration/configuring-ssl.html. This configuration works and results in the Rundeck service listening at https://<hostname>:4443. However, another option is to use a web proxy for SSL termination in front of Rundeck. Apache has the added benefits of listening on the standard HTTPS port of 443, enabling support for Multi-Factor Authentication, as well as providing options for additional logging and web application firewalls.

The example below will focus on Apache for SSL termination, reverse-proxy, and Google Authenticator. The first step is to install Apache and enable the required proxy and SSL modules:

```
sudo apt-get install apache2
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod rewrite
sudo a2enmod deflate
sudo a2enmod headers
sudo a2enmod proxy_connect
sudo a2enmod proxy_html
sudo a2enmod ssl
```

Next, configure the SSL certificate and private key used by Apache in /etc/ssl/certs and /etc/ssl/private. A standard x.509 SSL certificate and key is available from providers such as Verisign or Microsoft Certificate Services. For this paper, a private certificate will be used:

```
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/rundeck-apache.key -out /etc/ssl/certs/rundeck-apache.crt
```
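Before wiring the pair into Apache, it is worth confirming that the certificate and key actually match. A sketch using throwaway files in /tmp; the subject CN is the example hostname used in this paper, and -subj makes the generation non-interactive:

```bash
# Generate a throwaway self-signed pair, then compare the SHA-256
# digests of the public key as extracted from each file; the digests
# must be identical for a matching certificate/key pair.
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -subj "/CN=rundeck-prod.galactica.test" \
  -keyout /tmp/rd.key -out /tmp/rd.crt 2>/dev/null
crt_pub=$(openssl x509 -in /tmp/rd.crt -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in /tmp/rd.key -pubout 2>/dev/null | openssl sha256)
[ "$crt_pub" = "$key_pub" ] && echo "certificate and key match"
```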
Once the certificate and key are ready, apply the configurations below for Apache to redirect (mod_proxy) and terminate SSL connections (mod_ssl). These settings will force HTTP requests to use HTTPS, terminate HTTPS connections with TLS and strong ciphers, and connect to Rundeck at 127.0.0.1 instead of the public interface.
/etc/apache2/sites-available/rundeck-redirect-port80.conf
```bash
<VirtualHost _default_:80>
ServerName rundeck-prod.galactica.test:80
Redirect permanent / https://rundeck-prod.galactica.test/
</VirtualHost>
```
/etc/apache2/sites-available/rundeck-ssl.conf
```bash
<VirtualHost _default_:443>
ServerName rundeck-prod.galactica.test:443
ServerAlias *.galactica.test
SSLProxyEngine On
SSLEngine On
ProxyPreserveHost On
SetEnv proxy-sendchunked
ProxyVia On
ProxyRequests Off
ErrorLog ${APACHE_LOG_DIR}/ssl_error_log
TransferLog ${APACHE_LOG_DIR}/ssl_transfer_log
LogLevel warn
SSLProtocol TLSv1
SSLCipherSuite HIGH:!aNULL:!MD5
SSLHonorCipherOrder On
SSLCertificateFile /etc/ssl/certs/rundeck-apache.crt
SSLCertificateKeyFile /etc/ssl/private/rundeck-apache.key
RequestHeader set Front-End-Https "On"
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Forwarded-Port 443
Header add Strict-Transport-Security "max-age=631138519; includeSubdomains; preload"
Header add X-Frame-Options SAMEORIGIN
Header add X-Content-Type-Options nosniff
Header add X-XSS-Protection "1; mode=block"
ProxyPass / http://127.0.0.1:4440/ keepalive=On
ProxyPassReverse / http://127.0.0.1:4440/
</VirtualHost>
```
With the configuration files in /etc/apache2/sites-available, remove any defaults from /etc/apache2/sites-enabled and copy over the new rundeck configs. Restart Apache to load new configs.
```bash
cd /etc/apache2/sites-enabled
sudo rm 000-default.conf
sudo ln -s /etc/apache2/sites-available/rundeck-* .
sudo service apache2 restart
```
Configure the “grails.serverURL” value in /etc/rundeck/rundeck-config.properties to accept requests for the HTTPS URL. Restart Rundeck to load the config.
```
# change hostname here
grails.serverURL=https://rundeck-prod.galactica.test
```
Finally, set the “-Drundeck.jetty.connector.forwarded=true” in /etc/rundeck/profile to retain the XFF header information (see http://rundeck.org/docs/administration/configuring-ssl.html#using-an-ssl-terminated-proxy for more information). Rundeck now has TLS, strong-cipher termination in Apache. Additional modules are available for multi-factor authentication (e.g. Google Authenticator) and web application firewall protection (e.g. ModSecurity).
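Because a typo in /etc/rundeck/profile silently breaks the XFF handling, it is worth grepping for the flag before restarting rundeckd. A sketch against a scratch copy of the file; the RDECK_JVM line mirrors the style of the stock profile:

```bash
# Simulate the edited profile, then verify the forwarded flag is
# present before restarting the service.
cat > /tmp/rundeck-profile <<'EOF'
RDECK_JVM="$RDECK_JVM -Drundeck.jetty.connector.forwarded=true"
EOF
grep -q 'rundeck\.jetty\.connector\.forwarded=true' /tmp/rundeck-profile \
  && echo "forwarded flag set"
```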
### 2.4. Node Definitions
Nodes are servers, VMs, instances, or devices that Rundeck can access via SSH or connection plugins. The default option for node definitions is to use the built-in provider that is managed by XML files. Small and relatively static environments function fine with the default provider. Larger, dynamic networks benefit from node provider plugins such as the AWS EC2 or PuppetDB plugins.
One of the more compelling features of Rundeck is how it handles metadata. Rundeck nodes have attributes that describe the instance (e.g. Name, IP address, operating system, etc.). The node provider manages attributes in a key value format (e.g. the EC2 plugin will populate the AWS EC2 attribute "instanceId=i-389f2cdk" for a corresponding Rundeck node). Tags are a type of attribute used for classifications or categories (Rundeck User Guide, 2016). The combination of tags and attributes allow for dynamic targeting of hosts for jobs and ad-hoc commands.
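To illustrate, a single node entry in the built-in XML provider could look like the fragment below (a hypothetical resources.xml; the node name, address, and instanceId value are invented for the example, following the attribute and tag model described above):

```xml
<project>
  <!-- attributes are free-form key/value pairs; "tags" drive dynamic targeting -->
  <node name="web01" hostname="10.0.1.15" username="rundeck"
        osFamily="unix" tags="production,web"
        instanceId="i-389f2cdk"/>
</project>
```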
### 2.5. Connectivity
A wide range of plugins is available for Rundeck to expand access beyond the default SSH connectivity (Rundeck.org, n.d.). The WinRM (Windows Remote Management) plugin is available for native Windows commands. An alternative is to use SSH on Windows with OpenSSH server or similar. In other cases, Rundeck functions as a front-end for Puppet, Ansible, or Chef commands. Careful review is needed to determine what connectivity and access Rundeck should have in an organization. On one end of the spectrum, Rundeck could have root or admin-equivalent privileges on all devices and rely exclusively on internal ACLs for controls. Other teams could use multiple Rundeck instances with restricted service accounts to reduce the impact of a Rundeck server compromise.
### 2.6. Key Management
Rundeck’s Key Storage system stores private keys as either local files or as BLOBs in the attached database. Neither option is encrypted unless a Storage Converter plugin is used (Rundeck Administrator Guide, 2016). Storing keys in an external database such as MySQL makes them available to multiple Rundeck instances but also increases the attack surface. Use the Storage Converter plugin to encrypt keys kept in the database.
Using the filesystem storage for the Rundeck Key Storage with the Storage Converter for encryption is possible, but the tradeoff between security and availability may not be worth it. Restarts of Rundeck would require reading the Storage Converter password from a local source (file, dongle, etc.) or manually typed in at startup. Either way, service startups are more complicated, and the encryption key is potentially accessible in memory or on disk. A reasonable balance with confidentiality and availability is to encrypt the local filesystem within the OS or Virtual Machine and keep the Rundeck Key Storage in cleartext.
### 2.7. Jobs and Commands
Rundeck provides a user interface and API for executing and scheduling commands or jobs. If something works over SSH, then it will most likely work as a Rundeck job or command. Ad-hoc commands are single executions of shell commands against target nodes. They can be as simple as ‘uptime’ or a complex string of piped commands. Jobs are an ordered set of steps composed of CLI commands, shell scripts, and other Rundeck jobs. When sequenced into a job, these steps can perform complex orchestrations such as load-balancer failovers, operating system patching, and test script executions.
Jobs can take variable input with “options” that are entered via the UI or from external option providers (e.g. Jenkins). Similarly, a job step script can be a remote file accessed via a file path or URL. Remote options and scripts improve flexibility with version control, but also introduce added complexity for securing the content. Instead of just securing Rundeck, remote option and script providers are also in scope for hardening and audit.
### 2.7.1. Scheduling
Many Rundeck users focus on the scheduling feature (Edwards, 2014). Scheduled jobs execute at set times similar to cron services in Linux or schtasks in Windows. Using Rundeck for scheduling centralizes visibility into the tasks and outputs running on a network. Incident responders will look for scheduled tasks on hosts as possible logic bombs or other signs of intrusion (Kral, 2011). Moving scheduled tasks to Rundeck allows disabling cron and schtasks services. In turn, this makes detecting unauthorized scheduled tasks easier on hosts.
### 2.7.2. Job Naming
Naming jobs may seem straightforward at first, until an extensive collection of jobs and roles exists in Rundeck. As the jobs and ACL policy files grow, the Rundeck administrator becomes a bottleneck in aligning access controls with jobs. This bottleneck leads to contention for group names and rigid rules for job creation. An alternative method is to include the role names within the job name itself. A job called "Change Password" would become "Change Password (basic_user, power_user)". The keywords basic_user and power_user are matched by regex patterns inside the ACLs. Any time a job is added or changed, it need only include the correct role keyword to be usable. This layout significantly reduces the overhead for managing ACL policies.
Below are screenshots of the same set of jobs but with ACLs limiting “read” access for the basic_user role. In this situation, the power_user role can execute both jobs directly, but the basic_user role can only view “Change Password (basic_user,power_user).”
**power_user View** (2 jobs visible)

- Library
  - Change Password (power_user) » Changes password for a given user
- User Management
  - Change Password (basic_user,power_user) » This job changes a user’s password. Choose the node to change the password on.

**basic_user View** (1 job visible)

- User Management
  - Change Password (basic_user,power_user) » This job changes a user’s password. Choose the node to change the password on.
### 2.7.3. Job Groups
A good starting point for grouping jobs is to separate out jobs by function. User Management, Release, Patching are all examples of good top-level groups. A "Library" group of jobs can be referenced by multiple other jobs while keeping the contents of the scripts hidden. For example, a generic "Change Password" job can be created as a parent job that is executed by users. This job will target a node and pass a "username" option to a Library job that is typically not viewed directly by users. The Library job contains the actual scripts for executing the commands. Basic user ACLs reference the Library group for "run" but not "read" access. This approach enables execution while preventing most users from viewing any sensitive information.
### 2.8. Access Control Lists
Access Control Lists (ACLs) in Rundeck permit role-based access with a high degree of granularity. Resources in Rundeck are denied by default until explicitly allowed by ACLs. ACL policies manage privileges at many levels within Rundeck including projects, jobs, nodes, ad-hoc commands, key storage, and the Rundeck API itself. Policy files are written in YAML and reside in the /etc/rundeck directory, but can also be created using the System ACLs API and the Project ACLs API.
For a straightforward and easily audited set of ACLs, start with standard .aclpolicy files in the /etc/rundeck directory. Create one .aclpolicy file per group or role and name the file to match (e.g. an ACL policy file for the group "operations" should be called "operations.aclpolicy"). Test the resulting ACLs after any changes.
**Roles for example ACLs**
<table>
<thead>
<tr>
<th>Role</th>
<th>ACL File Name</th>
<th>Project Access</th>
<th>Node Access</th>
<th>Job Access</th>
</tr>
</thead>
<tbody>
<tr>
<td>Power User</td>
<td>power_user.aclpolicy</td>
<td>All</td>
<td>All</td>
<td>Create, update, run</td>
</tr>
<tr>
<td>Admin</td>
<td>admin.aclpolicy</td>
<td>All</td>
<td>All</td>
<td>All</td>
</tr>
<tr>
<td>Basic User</td>
<td>basic_user.aclpolicy</td>
<td>Support, Network</td>
<td>Production</td>
<td>“basic” jobs</td>
</tr>
</tbody>
</table>
### 2.8.1. ACL Breakdown
This is the default admin.aclpolicy from /etc/rundeck; it enables full access to everything for any member of the “admin” group.
```
description: Admin, all access.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '*' # allow read/create all kinds
by:
  group: admin

---

description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
  project_acl:
    - allow: '*' # allow admin of all project-level ACL policies
  storage:
    - allow: '*' # allow read/create/update/delete for all /keys/* storage content
by:
  group: admin
```
Rundeck ACL policy files define what actions (read, create, update, delete, admin, enable_executions, disable_executions, configure, import, export) apply to resources (project, system, system_acl, user, job, storage, node, ad-hoc, or event). The full options for resources and actions are available at http://rundeck.org/docs/administration/access-control-policy.html. The options available within ACLs make for accurate, if complicated, rulesets. Structuring the ACL policy files for flexibility can prevent frustration in job management. The approach recommended in this paper is as follows:
Deny Rundeck server access. By default, this is "localhost" and can be referenced in the ACL with the “nodename” node resource property.
```
node:
  - match:
      nodename: 'localhost'
    deny: '*'
  - match:
      nodename: '*'
    allow: '*'
```
Limit Rundeck administration functions to a single role (Admin in the example below). Admin access includes the ability to create/modify projects, modify project ACLs, and modify key storage. The "application: ‘rundeck’" code blocks are more restrictive for non-Admin roles.
**Admin Access**
```
description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
  project_acl:
    - allow: '*' # allow admin of all project-level ACL policies
  storage:
    - allow: '*' # allow read/create/update/delete for all /keys/* storage content
by:
  group: admin
```
**Basic User Access**

```
description: Basic User, restricted access.
context:
  application: 'rundeck'
for:
  resource:
    - equals:
        kind: system
      allow: [read] # allow read of system resources
  project:
    - match:
        name: ['Support','Network']
      allow: [read] # allow read access to specific projects
  storage:
    - allow: [read] # allow read access for using SSH keys
by:
  group: basic_user
```
Use regular expressions to target specific access to keywords in Job names. Deny read access to prevent users from viewing jobs and projects they should not access. Their user interface will be cleaner, and there are fewer chances for accidental privilege escalation. The example below will enable the basic_user role to run any job with the keyword “basic_user” in the name as well as prevent viewing of jobs in the Library group.
Introduction to Rundeck for Secure Script Executions
**Basic_user ACL**

```
description: Basic User, restricted access.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '.*' # allow read/create all kinds
  adhoc:
    - allow: '.*' # allow read/running/killing adhoc jobs
  job:
    - match:
        name: '.*basic_user.*'
      allow: [read,run,kill] # allow read/run/kill of matching jobs
    - match:
        group: 'Library.*'
      allow: [run,kill] # allow run/kill (but not read) of Library jobs
  node:
    - allow: '.*' # allow read/run for all nodes
by:
  group: 'basic_user'
```
### 2.9. Logging and Execution History
Rundeck has excellent history and full output logging features (Oyster.com Tech Blog, 2016). Standard output from commands is available as "activity" along with the time, date, and user who executed the job or ad-hoc command. Activity history resides in the database and is searchable via the Rundeck user interface.
Rundeck uses Log4j, and logs are written to /var/log/rundeck in the package distribution. Key logs for security review are rundeck.audit.log (ACL decisions), rundeck.jobs.log (changes to jobs), and rundeck.log (general application messages, including execution activity).
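A quick shell pass over the audit log can surface denied requests (a sketch; the exact message format of rundeck.audit.log varies by version, so the 'reject' pattern below is an assumption):

```shell
# count authorization rejections recorded by the ACL engine
sudo grep -ci 'reject' /var/log/rundeck/rundeck.audit.log
```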
Full details on Rundeck logging and formatting are available at http://rundeck.org/docs/administration/logging.html.
### 3. Example Use Case for Incident Response
While Rundeck supports many different use cases, incident response is a good example for security teams. Incident handling has 6 phases: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned (Kral, 2011). Rundeck fits within the Preparation (Access Control and Tools), Identification (Event gathering), and Containment (Short-term Isolation, Backup, Long-term Isolation) phases. Rundeck could also prove useful in the Eradication and Recovery phases as a Build/Deployment/Release tool. For this example, Rundeck is used for accessing, finding, containing, and removing intruders.
### 3.1.1. Preparation
The Preparation phase focuses on team readiness for handling an incident with little or no notice (Kral, 2011). Rundeck is a useful tool to access and review systems remotely. Incident response teams can build regular jobs to retrieve logs, hash critical files, and check permissions, as well as make changes such as updating firewall rules or applying patches. Many scripts used for hunting and investigations work well as Rundeck jobs.
### 3.1.2. Identification
A responder can use the Rundeck jobs created in the Preparation phase to determine whether an incident has occurred. These jobs are helpful for any systems or patterns not included in IDS systems. Ad-hoc commands are useful for hunting specific patterns across many devices. For example, the ad-hoc command below will return the hostname, md5sum for /etc/init/ssh.conf, and the number of files in /etc/init in a CSV format. Copy and save the output as a CSV file for analysis.
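The command itself is not reproduced here, but an equivalent one-liner might look like the following (a sketch assuming GNU coreutils on the target nodes; the paths come from the example in the text):

```shell
# hostname, md5 of /etc/init/ssh.conf, and the file count in /etc/init, as one CSV row
printf '%s,%s,%s\n' "$(hostname)" \
  "$(md5sum /etc/init/ssh.conf | cut -d' ' -f1)" \
  "$(ls /etc/init | wc -l)"
```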
### 3.1.3. Containment
The Containment phase goal is to prevent further damage and to preserve evidence (Kral, 2011). Compromised nodes tagged with a unique identifier, such as the Incident ID "inc50298" in this example, are easy to group in Rundeck. Containment jobs can then be targeted against nodes by Incident ID when “tags: ${option.Incident_ID}” is set for the node target. Short-term containment can be as simple as updating firewalls or network devices to stop traffic to a given host. Preservation can take the form of a snapshot or backup of a compromised host. Finally, the long-term containment step is to remove any backdoors or malware to return the device to production use. Below is an example Rundeck containment job:
1. Step 1 is a Library job reference. The target node's IP address passes to the Library job "Library/IR/Isolate Node". The Isolate Node job updates the firewall rules to block traffic to the IP address.
2. Step 2 performs a snapshot of the node from the backup server. The Library job "Library/IR/Snapshot Node" runs a snapshot script for the given hostname.
3. Step 3 uses a script path "file://ir/cleanup/${option.Incident_ID}.sh" to reference a unique script for this incident cleanup. The path expands to the filesystem location on Rundeck as "/ir/cleanup/inc50298.sh". This script contains containment commands unique to this incident. Running the job with an empty script is fine if full cleanup steps are not ready.
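Sketched as a job definition, the containment job above might be exported roughly as follows (a hypothetical YAML fragment; the field names follow the Rundeck job-YAML format but vary by version, and the group and option names are taken from the example):

```yaml
- name: Contain Compromised Nodes
  group: IR
  options:
    - name: Incident_ID
      required: true
  nodefilters:
    filter: 'tags: ${option.Incident_ID}'
  sequence:
    keepgoing: false
    commands:
      - jobref:
          group: Library/IR
          name: Isolate Node
      - jobref:
          group: Library/IR
          name: Snapshot Node
      - scriptfile: 'file://ir/cleanup/${option.Incident_ID}.sh'
```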
### 4. Conclusion
Rundeck provides a user-friendly interface for scheduling and executing commands against a dynamic inventory of devices. Central, agentless access to network devices, Windows servers, Linux VMs, and cloud instances is very powerful for developers, admins, and security teams alike. All teams benefit from increased visibility into execution history, logging, and centralized task scheduling. Coordination between security and DevOps teams improves collaboration on access control and key management.
However, a single point of access for systems is also a key target for intruders. Protecting this access begins with hardening Rundeck itself with TLS and permission changes. A consistent approach to job and ACL layout ensures correct access while remaining agile for modern environments. Maintaining clear roles and ACL mappings results in low-stress least-privilege access.
As teams adopt Rundeck for everyday tasks, the history of those tasks becomes more valuable for auditors and incident responders. Furthermore, executions outside of Rundeck are highlighted for possible investigation. Rundeck itself is useful during incident response for gathering information and containment. Security teams can quickly tag compromised nodes and run targeted discovery and containment jobs. This approach works with other tools and scripts. However, the built-in features in Rundeck make for a simple solution without requiring extensive customization.
References
Due date: Thursday March 19th 11:59:59pm EST.
Points: This lab is worth 12 points (out of 200 points in 6.004).
Getting started: To create your initial Lab 4 repository, please visit https://6004.mit.edu/web/spring20/user/labs/lab4. Once your repository has been created, you can clone it into your workspace by running:
git clone git@github.mit.edu:6004-spring20/labs-lab4-{YourMITUsername}.git lab4
Turning in the lab: To turn in this lab, commit and push the changes you made to your git repository. After pushing, check the course website (https://6004.mit.edu, Labs/Didit tab) to verify that your submission was correctly pushed and passes all required tests. If you finish the lab in time but forget to push, you will incur the standard late submission penalties.
Check-off meeting: After turning in this lab, you are required to go to the lab for a check-off meeting by Wednesday, April 1st. The checkoffs for this lab will begin on Friday, March 13th.
To be able to check off this lab you must complete all of the exercises except Exercise 9 and answer all Discussion Questions. You must pass all of the “Mandatory” tests in Didit. Exercise 9 is optional for the checkoff but must be completed to receive full credit.
Introduction
In this lab you will build a 32-bit Arithmetic Logic Unit (ALU). An ALU performs arithmetic and bitwise-logical operations on integers. It is an important component of computer processors and many other digital systems. You will build an ALU for the RISC-V processor that you will design in later labs.
The first part of the lab focuses on building a functionally correct ALU. You will first design and implement a variety of combinational circuits that implement different operations, then combine them to construct an ALU for the RISC-V processor.
The second part of the lab examines the performance and cost of several ALU components by using synth. The last exercise concerns the design and evaluation of a fast adder.
Implementation restrictions: In this lab you will build several circuits for which Minispec already has operators. Therefore, you cannot use these operators in your circuits. Specifically, you are not allowed to use the following operators in your logic: arithmetic operators (`+ - * / % << >>`), relational operators (`< <= >= >`), and variable bit indexing (`a[b]`, where `a` and `b` are both `Bit#(n)` variables).
Bitwise-logical operators (& | ^ ~), equality/inequality operators (\(==\ !=\)), conditional expressions and statements (?, if, case), and loops are allowed. You will get an illegal operator error if your circuit, when synthesized, makes use of any forbidden operator.
There is one exception to this rule: you can use any operators on Integer types. Integer expressions are always evaluated during compilation (e.g., in parametric functions, loops, etc.) so operations on Integers never produce circuits with these operations. For example, the following code is allowed:
```minispec
for (Integer i = 0; i < 32; i = i + 1) begin
c[i] = a[i] ^ b[i];
end
```
Although the code uses the < and + operators, it does so on Integer variable \(i\). The loop is unrolled and the resulting circuit does not perform any of those operations; it just performs bitwise XOR. (Note that you would not need to write a loop to achieve the same result; you could just write `c = a ^ b;`.)
Describing compact circuits: In this lab, we will ask you to re-use functions from one exercise to the next. Please be aware that every time you call a function, you are creating hardware, even if you discard the result of that function. We will denote the restrictions on the use of your functions by asking you to use a SINGLE call to a particular function. If you receive a warning about using more than one copy of a function when building your code, please reread this section. These restrictions will be enforced at checkoff and you will be required to fix your code, even if you pass the tests in Didit.
For example, suppose you have implemented your 32-bit ripple-carry adder, rca#(32), and you want to add 4 or 8 to a, depending on the value of foo. The following implementations create two adders:
```minispec
// BAD: two adders!
Bit#(32) a = ...;
Bit#(32) ret = 0;
if (foo)
ret = rca#(32)(a, 4, 0);
else
ret = rca#(32)(a, 8, 0);
// BAD: also two adders!!
Bit#(32) ret = foo ? rca#(32)(a, 4, 0)
: rca#(32)(a, 8, 0);
// BAD: also two adders!!!
Bit#(32) ret = rca#(32)(a, 8, 0);
if (foo)
ret = rca#(32)(a, 4, 0);
```
These all produce two adders because Minispec synthesizes all function calls to hardware. In order to synthesize just one adder, consider storing the arguments to your function in a separate temporary variable:
```minispec
// BETTER: one adder
Bit#(32) a = ...;
Bit#(32) b = foo ? 4 : 8;
Bit#(32) ret = rca#(32)(a, b, 0);
```
Minispec resources: We recommend that you complete the Minispec combinational logic tutorial before jumping into the exercises. We especially recommend that you review Sections 5 and 8 through 10, which were not needed in lab 3 but are useful for this lab.
Building and Testing Your Circuits
You can build your code with make. If you just run make it will build all of the exercises. You can instead pass in a target so that only one of the exercises will be built, like so:
```
make <target>
```
This will then create a program Tb_<target> that you can run which simulates the circuit. It will run through a set of test cases printing out each input and whether or not it fails to match the expected output. If it passes all the tests it will print out PASSED at the end.
- To build all the targets, run: make all
- To build and test everything, run: make test
Finally, you may want to test a particular function that the staff-provided tests do not cover. For example, you may have built your function out of smaller functions; if the whole function is not working properly, you’d want to test the smaller functions first. You can use the ms eval command for this purpose. ms eval takes two arguments: the file where the function is, and the call to the function in quotes. For example, to test function barrelRShift from the first exercise, you can run:
```
ms eval ALU.ms "barrelRShift(32'hBEBCAFE, 4, 1)"
```
Part 1: Designing a Functionally Correct ALU
An arithmetic-logic unit (ALU) is the part of a processor that carries out the arithmetic and logic operations specified by each instruction. Many of these operations (e.g., addition and subtraction) can be performed by the same logic with minor changes. Therefore, rather than building a different circuit for each operation, each of these exercises asks you to first build a circuit that performs a single operation and then to reuse it or extend it to perform more operations. This results in a better design, as different operations share logic, and also results in simpler code.
1 32-bit Shifter
In these exercises, you will implement 32-bit shifters for logic and arithmetic operations. You will implement your shifters by building on the barrel shifter design from Lecture 8.
**Exercise 1 (13%)**: First, implement a general 32-bit barrel right shifter `barrelRShift`. `barrelRShift` can shift in ones or zeros as specified by input `sft_in`. Then, using a SINGLE call to `barrelRShift`, build `sr32`, a 32-bit arithmetic and logical right shifter. A logical right shift shifts in zeros, whereas an arithmetic right shift shifts in the most significant bit of the input (for two's complement numbers, this divides by $2^{sftSz}$).
**Hint**: To replicate 1-bit `sft_in` to N bits, use the Minispec function `signExtend`, which extends the input bit by copying its MSB. Section 5 of the Minispec combinational logic tutorial explains `signExtend()` further.
Fill your code in the following skeleton functions in `ALU.ms`:
```minispec
// 32-bit right barrel shifter
// Arguments: in (value to be shifted); sftSz (shift size); sft_in (bit value shifted in)
function Bit#(32) barrelRShift(Bit#(32) in, Bit#(5) sftSz, Bit#(1) sft_in);
// 32-bit arithmetic/logical right shifter
// arith = 1, arithmetic shift; logical shift otherwise
function Bit#(32) sr32(Bit#(32) in, Bit#(5) sftSz, Bit#(1) arith);
```
Test your design by running: `make sr32 && ./Tb_sr32`
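Before writing the Minispec, it can help to pin down the expected outputs. The following is a plain Python behavioral model of the two shifters (a sketch of ours, not Minispec, so the lab's operator restrictions do not apply to it; the names merely mirror the skeletons):

```python
def barrel_rshift(x: int, sft: int, sft_in: int, n: int = 32) -> int:
    """Right-shift n-bit x by sft positions, filling vacated bits with sft_in."""
    fill = ((1 << sft) - 1) << (n - sft) if (sft_in and sft) else 0
    return ((x >> sft) | fill) & ((1 << n) - 1)

def sr32(x: int, sft: int, arith: int) -> int:
    """Arithmetic (arith=1) or logical right shift: arithmetic shifts in the MSB."""
    msb = (x >> 31) & 1
    return barrel_rshift(x, sft, msb if arith else 0)
```

For example, `sr32(0x80000000, 4, 1)` gives `0xF8000000`, while `sr32(0x80000000, 4, 0)` gives `0x08000000`.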
**Exercise 2 (6%)**: Using a SINGLE call to `barrelRShift`, implement `sll32`, a 32-bit logical left shifter. Your implementation of `sll32` should not mirror the implementation of `barrelRShift`.
**Hint**: To construct a right shifter from a left shifter, you just need to reverse the bits of the input and output. The `reverseBits` Minispec function does this.
Fill your code in the following skeleton function in `ALU.ms`.
```minispec
function Bit#(32) sll32(Bit#(32) in, Bit#(5) sftSz); // 32-bit logical left shifter
```
Test your design by running: `make sll32 && ./Tb_sll32`
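To see why the hint works, here is the same trick in a small Python sketch (ours, not Minispec; `reverse_bits` plays the role of Minispec's `reverseBits`):

```python
def reverse_bits(x: int, n: int = 32) -> int:
    """Reverse the bit order of an n-bit value."""
    return int(format(x, f'0{n}b')[::-1], 2)

def sll32(x: int, sft: int, n: int = 32) -> int:
    """Left shift built from a right shifter: reverse, shift right (zeros in), reverse back."""
    return reverse_bits(reverse_bits(x, n) >> sft, n)
```

For instance, `sll32(1, 4)` returns 16, exactly what `1 << 4` would give.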
Exercise 3 (6%): Using a SINGLE call to barrelRShift, implement a 32-bit FULL shifter, sft32, which has all three shift operations: left shift, logical right shift, and arithmetic right shift. Your implementation should have only one barrel shifter.
Note: You cannot implement this function by calling your sr32 and sll32 functions because there would be one barrel shifter in each of them. You will need to merge the techniques you used in those functions and call barrelRShift directly.
Note: This function is the first to use an enum argument. Enums are covered in Section 9 of the Minispec combinational logic tutorial.
Fill your code in the following skeleton function in ALU.ms.
```minispec
// 32-bit FULL shifter
typedef enum {LogicalRightShift, ArithmeticRightShift, LeftShift} ShiftType;
function Bit#(32) sft32(Bit#(32) in, Bit#(5) sftSz, ShiftType shiftType);
```
Test your design by running: make sft32 && ./Tb_sft32
2 32-bit Comparator
In these exercises, you will implement a 32-bit comparator for both signed and unsigned numbers. These exercises will leverage your knowledge of two’s complement encoding from Lecture 1.
In general, you can build n-bit comparators by using a chain or a tree of 1-bit comparators. We discuss the chain approach below. Section 9 of the Minispec combinational logic tutorial includes an example of building comparators using a tree approach. The tree approach uses function composition to recursively decide whether a is less than, equal to, or greater than b. The tree approach is faster, but will require you to write a parametric function. You are free to choose either approach.
The chain approach compares two unsigned 32-bit values a and b by comparing their bits one by one left-to-right, i.e., from the most-significant bit (MSB) to the least-significant bit (LSB), as shown in Figure 1. In the figure, $a_i$ and $b_i$ denote the $i^{th}$ bit of $a$ and $b$, respectively. $eq_i$ is 1 if $a$ and $b$ are equal from the MSB until their $i^{th}$ bit, i.e., if $a[N-1:i] == b[N-1:i]$. Likewise, $lt_i$ is 1 if $a$ is less than $b$ from the MSB until the $i^{th}$ bit, i.e., if $a[N-1:i] < b[N-1:i]$. We are interested in $lt_0$, which will be 1 if $a$ is less than $b$ and 0 otherwise.

Exercise 4 (13%): First, implement a one-bit comparator cmp (each of the blocks in Figure 1). Then, use it to construct an unsigned 32-bit less-than comparator ltu32.
Hint: To build the one-bit comparator, we recommend you derive the Boolean equations for $eq_i$ and $lt_i$ as functions of inputs $eq_{i+1}$, $lt_{i+1}$, $a_i$, and $b_i$. Note that $eq_i$ depends only on $eq_{i+1}$, $a_i$, and $b_i$: for $a$ and $b$ to be equal down to the $i^{th}$ bit, they must be equal down to the $(i+1)^{th}$ bit and bits $a_i$ and $b_i$ must be equal. However, $lt_i$ does depend on all four inputs: for $a$ to be less than $b$ down to the $i^{th}$ bit, either $a$ is less than $b$ down to the $(i+1)^{th}$ bit (in which case all lower-order bits don’t matter), or $a$ is equal to $b$ down to the $(i+1)^{th}$ bit and $a_i$ is less than $b_i$.
Finally, note that it doesn’t make sense for $eq_i$ and $lt_i$ to both be 1. Therefore, this is an illegal input and the output in this case is undefined. In other words, you can return any value you like if $eq_i$ and $lt_i$ are both 1.
To build the 32-bit comparator, feed inputs $eq_N = 1$ and $lt_N = 0$ to the left-most one-bit comparator. Fill your code in the following skeleton functions in `ALU.ms`.
```minispec
// one-bit less-than comparator
// Arguments: a, b (1-bit values), eq, lt (eq and lt from previous comparator)
// Return: {eq_i, lt_i}
function Bit#(2) cmp(Bit#(1) a, Bit#(1) b, Bit#(1) eq, Bit#(1) lt);
function Bit#(1) ltu32(Bit#(32) a, Bit#(32) b); // unsigned 32-bit less-than comparator
```
- To test the 1-bit comparator, run `make cmp && ./Tb_cmp`
- To test the unsigned 32-bit less-than comparator, run `make ltu32 && ./Tb_ltu32`
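If your chain misbehaves, comparing against a software model of the same structure can localize the bug. This Python sketch mirrors Figure 1; the Boolean equations are one possible answer to the hint above, written by us:

```python
def cmp1(a: int, b: int, eq: int, lt: int):
    """One block of Figure 1: (eq_i, lt_i) from bit i and the block to its left."""
    eq_i = eq & (1 ^ a ^ b)          # still equal iff equal so far and a_i == b_i
    lt_i = lt | (eq & (1 ^ a) & b)   # already less, or equal so far and a_i=0, b_i=1
    return eq_i, lt_i

def ltu32(a: int, b: int) -> int:
    """Unsigned 32-bit less-than, chaining cmp1 from MSB to LSB."""
    eq, lt = 1, 0                    # eq_N = 1, lt_N = 0 into the leftmost block
    for i in range(31, -1, -1):
        eq, lt = cmp1((a >> i) & 1, (b >> i) & 1, eq, lt)
    return lt
```

Feeding the model the same inputs as a failing `Tb_ltu32` case shows which stage first diverges.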
**Exercise 5 (6%)**: Using a SINGLE call to `ltu32`, implement `lt32`, a *signed/unsigned* 32-bit less-than comparator.
**Hint 1**: Consider bit string $a = a_{N-1}a_{N-2} \ldots a_1a_0$. If we interpret $a$ as a two’s complement signed binary number, it represents the value:
$$v_{\text{signed}} = -2^{N-1}a_{N-1} + \sum_{i=0}^{N-2} 2^i a_i$$
However, if we interpret $a$ as an unsigned binary number, it represents the value:
$$v_{\text{unsigned}} = \sum_{i=0}^{N-1} 2^i a_i = +2^{N-1}a_{N-1} + \sum_{i=0}^{N-2} 2^i a_i$$
The only difference between the signed and unsigned representations is that the weight of the most significant bit is negative in the signed case and positive in the unsigned case. Therefore, to perform a signed comparison using an unsigned comparator, you only need to tweak the most significant bit of inputs $a$ and $b$, i.e., the inputs to the leftmost comparator. To derive how you should change the inputs to the leftmost comparator, think about what a less-than comparison becomes when you change the signs of the numbers you’re comparing.
**Hint 2**: To choose between signed and unsigned comparison, you only need the `isSigned` input to control the most significant bit of inputs $a$ and $b$ of the unsigned comparator.
```minispec
// Signed/Unsigned 32-bit less-than comparator
// isSigned, signed comparison; unsigned otherwise
function Bit#(1) lt32(Bit#(32) a, Bit#(32) b, Bit#(1) isSigned);
```
Test your design by running: `make lt32 && ./Tb_lt32`
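You can convince yourself of the MSB tweak numerically. In the Python sketch below (ours), flipping both MSBs turns a signed comparison into an unsigned one; Python's `<` on the tweaked values stands in for your `ltu32`:

```python
def lt32(a: int, b: int, is_signed: int) -> int:
    """Signed/unsigned 32-bit less-than using a single unsigned compare."""
    if is_signed:
        # Flipping the MSB maps two's complement order onto unsigned order:
        # -2^31..-1 becomes 0..2^31-1, and 0..2^31-1 becomes 2^31..2^32-1.
        a ^= 0x80000000
        b ^= 0x80000000
    return 1 if a < b else 0
```

For example, `lt32(0xFFFFFFFF, 1, 1)` is 1 (0xFFFFFFFF is \(-1\) as a signed value), while `lt32(0xFFFFFFFF, 1, 0)` is 0.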
3. **$n$-bit Ripple-Carry Adder/Subtractor**
In these exercises you will implement an $n$-bit ripple-carry adder, then use it to build a single circuit that performs addition and subtraction.
These exercises are the first that require you to implement *parametric functions*, i.e., functions where the arguments and/or output have generic bit-widths. In this case, bit-widths are given by the `Integer` parameter $n$. Section 8 of the Minispec combinational logic tutorial covers parametric functions.
**Exercise 6 (6%)**: First, implement a *full adder* circuit `fullAdder`. Then, construct an $n$-bit ripple-carry adder `rca#(n)`.
Note that the output of the adder is only \( n \) bits wide instead of \( n + 1 \). This is slightly different than what we’ve seen in lecture, but in RISC-V, every operand and output has the same width (32 bits for RV32). Because the inputs and output have the same width, adding both operands may cause an overflow.
Fill your code in the following skeleton functions in \texttt{ALU.ms}.
```minispec
// full adder
function Bit#(2) fullAdder(Bit#(1) a, Bit#(1) b, Bit#(1) carryIn);
// n-bit ripple-carry adder
function Bit#(n) rca#(Integer n)(Bit#(n) a, Bit#(n) b, Bit#(1) carryIn);
```
**Exercise 7 (6%)**: Using a SINGLE call to \texttt{rca#(n)}, implement \texttt{addSub#(n)}, a 2’s complement adder/subtractor.
\textit{Hint:} Recall that \( a - b \) is equivalent to \( a + (-b) \). In two's complement representation, \( -b \) is obtained by flipping all the bits of \( b \) and adding 1.
Fill your code in the following skeleton function in \texttt{ALU.ms}.
```minispec
// n-bit ripple-carry adder/subtractor
// returns a - b if isSub==1 and a + b otherwise
function Bit#(n) addSub#(Integer n)(Bit#(n) a, Bit#(n) b, Bit#(1) isSub);
```
Test your design by running \texttt{make addSub32 && ./Tb_addSub32}
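The identity in the hint is easy to check in software. This Python model (ours) mirrors the ripple structure bit by bit and routes the "+1" through the carry-in, exactly as the hardware should:

```python
def rca(a: int, b: int, carry_in: int, n: int = 32) -> int:
    """Bit-serial ripple-carry addition; the result is truncated to n bits."""
    result, carry = 0, carry_in
    for i in range(n):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        result |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (carry & (ai ^ bi))   # full-adder carry-out
    return result

def add_sub(a: int, b: int, is_sub: int, n: int = 32) -> int:
    """a - b == a + ~b + 1: invert b and feed the +1 in through carry_in."""
    if is_sub:
        b = ~b & ((1 << n) - 1)
    return rca(a, b, is_sub, n)
```

For example, `add_sub(5, 7, 1)` gives `0xFFFFFFFE`, i.e., \(-2\) in two's complement.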
## 4 32-bit Arithmetic Logic Unit
Now that you have built the ALU’s main components, it is time to build the full ALU. Table 1 shows the functions that the RISC-V processor needs the ALU to perform.
| ALU Function | Operation | Output |
|---|---|---|
| Add | 32-bit ADD | \( a + b \) |
| Sub | 32-bit SUBTRACT | \( a - b \) |
| And | 32-bit AND | \( a \mathbin{\&} b \) |
| Or | 32-bit OR | \( a \mid b \) |
| Xor | 32-bit XOR | \( a \oplus b \) |
| Slt | Set if less than signed | \( (a <_s b)\ ?\ 1 : 0 \) |
| Sltu | Set if less than unsigned | \( (a <_u b)\ ?\ 1 : 0 \) |
| Sll | Shift left logical | \( a \ll b[4:0] \) |
| Srl | Shift right logical | \( a \gg b[4:0] \) |
| Sra | Shift right arithmetic | \( a \gg_s b[4:0] \) |
Table 1: RISC-V ALU functions
**Exercise 8 (15%)**: Using a SINGLE call to \texttt{addSub} (an adder/subtractor), \texttt{lt32} (a signed/unsigned comparator), and \texttt{sft32} (a full shifter), implement \texttt{alu}, a 32-bit ALU for a RISC-V processor.
\textit{Hint:} Use zeroExtend to extend the 1-bit output of \texttt{lt32} to 32 bits. This extends the input with zeros.
Fill in the following skeleton function in \texttt{ALU.ms}.
```minispec
typedef enum {Add, Sub, And, Or, Xor, Slt, Sltu, Sll, Srl, Sra} AluFunc;
function Bit#(32) alu(Bit#(32) a, Bit#(32) b, AluFunc func);
```
Test your design by running \texttt{make alu && ./Tb_alu}
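If one ALU test fails, it helps to know the exact expected value. The Python oracle below (ours, not part of the lab infrastructure) evaluates each row of Table 1 on 32-bit values:

```python
def alu_ref(a: int, b: int, func: str) -> int:
    """Expected 32-bit ALU output for each Table 1 function."""
    m = 0xFFFFFFFF
    def signed(x):  # interpret a 32-bit pattern as two's complement
        return x - (1 << 32) if x & 0x80000000 else x
    sh = b & 0x1F   # shifts use only b[4:0]
    return {
        'Add':  (a + b) & m,
        'Sub':  (a - b) & m,
        'And':  a & b,
        'Or':   a | b,
        'Xor':  a ^ b,
        'Slt':  1 if signed(a) < signed(b) else 0,
        'Sltu': 1 if a < b else 0,
        'Sll':  (a << sh) & m,
        'Srl':  a >> sh,
        'Sra':  (signed(a) >> sh) & m,
    }[func]
```

For example, `alu_ref(0x80000000, 4, 'Sra')` is `0xF8000000`, matching the arithmetic shift from Exercise 1.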
Part 2: Analyzing and Improving Circuit Performance
So far we have focused on describing circuits in Minispec and simulating them to test their functionality, without paying much attention to their implementation. Now that you have designed your first large digital circuit, it is time we start looking at implementation tradeoffs. Like in lab 3, we will use the synth synthesis tool to translate your Minispec code into optimized gate-level implementations.
In this part of the lab, we will first synthesize and analyze several circuits. These exercises will build your intuition of delay and area implementation tradeoffs and of the optimizations synth can perform. We will then study delay-area tradeoffs of our barrel shifter implementation. Finally, you will (optionally) build a better adder to improve your ALU’s performance. We will discuss synth’s usage options along the way; for a quick reference, run synth -h.
5 Analyzing the Ripple-Carry Adder
First, synthesize your \( n \)-bit ripple-carry adder function with \( n = 4 \) bits, i.e., function \( \text{rca#}(4) \), by running:
```
synth ALU.ms "rca#(4)"
```
synth reports three pieces of information. First, it reports three summary statistics: the number of gates, the total area these gates take (in square micrometers), and the circuit’s critical-path delay (i.e., the longest propagation between any input-output pair) in picoseconds. Second, it reports the delay across the different gates of the critical path. Third, it shows a breakdown of the types of gates used and their area.
synth can also produce circuit diagrams. Run:
```
synth ALU.ms "rca#(4)" -v
```
This will produce a diagram in rca_4.svg. Open it by running `inkview rca_4.svg` (or use another SVG viewer).
By default, synth uses the basic standard cell library, which only has a buffer, an inverter, and two-input NAND and NOR gates. You can control the library used with the \(-l\) flag. (Note that that’s the lowercase letter \( L \), not the digit 1.) Let’s try using the extended library, which also has AND, OR, XOR, and XNOR gates, as well as 2, 3, and 4-input gates. Run:
```
synth ALU.ms "rca#(4)" -l extended
```
**Warm-up Question 1:** How does \( \text{rca#}(4) \) change when synthesized with the extended library? What gates are used now vs. with the basic library? How does this affect area and delay? (You can visualize the new circuit by running `synth ALU.ms "rca#(4)" -l extended -v && inkview rca_4.svg`.)
By default, synth tries to minimize delay by setting a target delay of 1 ps. You can control the target delay with the \(-d\) flag. Lax delays will cause the synthesis tool to optimize for area instead. This way, you can trade off delay for area (in current technology power is also a crucial consideration, even more so than area; but power analysis is a complex topic, so in this course we will focus on area and delay). For example, the following command uses a target delay of 1000 ps:
```
synth ALU.ms "rca#(4)" -l extended -d 1000
```
**Warm-up Question 2:** Synthesize \( \text{rca#}(4) \) using the command above. How do its area and delay change vs. the previous (delay-optimized) circuit?
synth also performs some optimizations, such as Boolean simplification. Take a look at the add4_rca
and add4_addSub functions in Adders.ms. Both implement the same function, adding two 4-bit numbers.
However, add4_rca uses your rca#(4) circuit with the carry-in argument set to 0, whereas add4_addSub uses
your addSub#(4) circuit with isSub set to 0.
**Warm-up Question 3:** Synthesize add4_rca and add4_addSub. Why don’t the circuits differ? Give
one example of how Boolean simplification is helping produce this result.
Finally, let’s analyze how area and delay grow with the number of inputs.
**Warm-up Question 4:** Synthesize your rca#(n) function with $n \in \{4, 8, 16, 32, 64\}$. How do you
intuitively expect the area and delay of the circuit to grow as you increase the number of bits in the
input? Do the results match your expectation?
### Discussion Questions
The following Discussion Questions are worth 5% of your grade. Please write your answers in the
provided file discussion_questions.txt. You can update your answers before your checkoff meeting,
but you are required to submit an initial answer to each question when you submit the lab.
#### 6 Multiplexers and Fanout
Muxes.ms has several implementations of a 2-bit multiplexer:
- `mux2_sop` uses a minimal sum-of-products representation.
- `mux2_select` uses the conditional (ternary) operator.
- `mux2_if` uses if-else blocks.
- `mux2_case` uses a case statement.
Each of these may result in different implementations.
**Warm-up Question 5:** Synthesize the above functions. Why don't their implementations meaningfully differ?
Now let’s focus on critical-path analysis. Figure 2 shows a sample critical-path analysis from synth (this
is for the seven-segment decoder circuit from lab 3). This analysis shows how delay grows over the logic
gates in the critical path. Each row corresponds to a single gate. The *gate delay* column reports the gate’s
propagation delay (from its input becoming a valid and stable digital value to its output becoming a valid
and stable digital value). The *fanout* column reports the number of inputs that the output of the gate is
driving. For example, the NOR3 gate in Figure 2 contributes the longest delay, 43.4 ps, and its output
is connected to the inputs of five gates (among them, the NOR2 gate that’s next in the critical path).
The first row in this table always corresponds to an input, and the last row corresponds to an output.
Note that the first row has some delay because there is always a gate (a buffer in our case) driving every
input pin. By contrast, the output has no gate delay, as the output of the last gate is simply the output.
In general, the gate’s propagation delay depends on two main factors: the complexity of the gate and
how loaded the gate’s output is.
The complexity of the gate is technology-specific; as we saw in Lecture 6 (and we will see in more detail
in Lecture 9), in current technology (CMOS), inverting logic is faster than non-inverting logic (so you’ll see
the synthesis tool use inverting gates much more often), and gates with more inputs are slower (e.g., NAND3
is slower than NAND2).
How loaded a gate is also affects its delay. The output of each gate has a certain *drive strength*, which
refers to how much current the gate can draw from the power supply to control the output voltage. In turn,
Critical-path delay: 127.19 ps
Critical path: binary_number[3] -> out[0]
| Gate/port | Fanout | Gate delay (ps) | Cumulative delay (ps) |
|---|---|---|---|
| binary_number[3] | 3 | 7.6 | 7.6 |
| INV | 7 | 18.9 | 26.5 |
| NAND2 | 2 | 12.2 | 38.7 |
| NOR3 | 5 | 43.4 | 82.1 |
| NOR2 | 3 | 13.4 | 95.5 |
| NAND3 | 5 | 23.0 | 118.5 |
| NAND2 | 1 | 8.7 | 127.2 |
| out[0] | 0 | 0.0 | 127.2 |
Figure 2: Example critical-path analysis report.
Each gate input is essentially a capacitor, so the capacitance at the output of a gate grows the more gate inputs we connect to it. In short: the higher the gate’s drive strength, the faster it can switch the output between 0 and 1; and the more inputs the gate’s output is connected to, the more capacitance that needs to be charged or discharged and the slower the output will switch between 0 and 1.
While the exact relationship between output load and delay is complex, a good first-order approximation is to consider that delay grows linearly with fanout, the number of inputs connected to each output. Indeed, Figure 2 shows that gates with higher fanout have higher delay (e.g., compare both NAND2 gates).
You should consider the effect of fanout when designing circuits. For example, it is common to believe that the delay of a multiplexer does not depend on its width—after all, a 1-bit and a 32-bit mux both have the same levels of logic. But as the number of bits grows, the single select signal drives a growing number of gates, i.e., its fanout grows.
Muxes.ms has a parametric $n$-bit multiplexer $\text{mux#}(n)$. Synthesize $\text{mux#}(n)$ for $n \in \{4, 16, 64, 256\}$ using a really high delay target, e.g., 10000 ps, like so:
```
synth Muxes.ms "mux#(64)" -l extended -d 10000
```
Discussion Question 1 (1%): How do you intuitively expect the delay of the circuit to grow as you increase the number of bits in the input? Does the result match your expectations? How is this delay distributed across the nodes of the critical path?
To ameliorate the effect of high fanout, tools automatically insert chains or trees of buffers and inverters to reduce the load at each gate. For example, consider a single gate that is driving 64 inputs, which causes excessive delay. The tool could instead connect the gate to four inverters, each of the four inverters to another four inverters, and each of these 16 inverters to four of the original 64 gates. Each inverter adds some internal delay, but each gate in the tree now only drives four other gates.
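The payoff of such trees grows logarithmically. A small Python helper (our own, purely illustrative) counts the levels a branch-ary tree needs before no node drives more than `branch` loads:

```python
def tree_depth(total_fanout: int, branch: int = 4) -> int:
    """Levels of a branch-ary buffer/inverter tree needed to drive total_fanout loads."""
    depth, reach = 0, 1
    while reach < total_fanout:
        reach *= branch   # each level multiplies the number of driven nodes by `branch`
        depth += 1
    return depth
```

Driving 64 loads through a 4-ary tree needs `tree_depth(64)` = 3 levels, and every node then drives only 4 inputs instead of one gate driving all 64.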
Synthesize $\text{mux#}(n)$ for $n \in \{4, 16, 64, 256\}$ again, but this time do not specify a target delay so that the muxes are delay-optimized instead of area-optimized, like so:
```
synth Muxes.ms "mux#(64)" -l extended
```
Discussion Question 2 (1%): How do you expect the delay and area to grow as you increase the width for these delay-optimized multiplexers? Do the results match your expectations?
Standard cell libraries often include gates of different sizes. Larger gates have higher drive strength, but they also take more area and have higher capacitance at their inputs, so they take more effort to switch. Mixing gates of different sizes gives the synthesis tool more freedom to trade area for delay. For example, the tool can use larger gates along the critical path to reduce delay, and smaller gates elsewhere to keep area low.
Synth has a multisize library that includes the same gates as extended, but each gate has variants of multiple sizes (denoted X1, X2, X4, and X8). Synthesize $\text{mux#}(64)$ with the multisize library, like so: `synth Muxes.ms "mux#(64)" -l multisize`
7 Analyzing your ALU
Synthesize your ALU with the multisize library and optimizing for delay:
```shell
synth ALU.ms alu -l multisize
```
Also synthesize the ALU’s main components: addSub32, lt32, and sft32. Use the same settings.
**Discussion Question 4 (1%)**: Report the area, delay, and gate count of the ALU and each component. Based on these results, which of the main components determines the ALU’s critical-path delay? Which component consumes more of the ALU’s area?
In building the full shifter, we emphasized using a single barrel shifter at its core to perform all functions (left/right shifts and arithmetic/logical shifts). Now consider an alternative shifter implementation, sft32_alt in ALU.ms. **sft32_alt** calls your full barrel shifter three times. This will instantiate several copies of the barrel shifter; because the tool performs Boolean simplification, each copy will be optimized for each operation.
Synthesize the sft32_alt circuit.
**Discussion Question 5 (1%)**: Which shifter implementation takes less area, and which has a shorter delay? Which variant is more appropriate for your ALU? (you can modify your ALU to check, but you don’t need to).
Completing the previous exercises correctly is the minimum required to checkoff this lab. To get full credit on the lab, complete the exercise below as well.
8 One Last Design Problem: Building a Better Adder
**Exercise 9 (24%)**: Processors use several adders, so it pays off to optimize them. Your task is to implement a faster 32-bit adder. Your score depends on how fast you can make the adder. If your adder achieves a critical-path delay $d$,
- For $d > 400$ ps, score = 0
- For $d \in (350 \text{ ps}, 400 \text{ ps}]$, score = 5
- For $d \in (300 \text{ ps}, 350 \text{ ps}]$, score = 10
- For $d \in (250 \text{ ps}, 300 \text{ ps}]$, score = 15
- For $d \in (200 \text{ ps}, 250 \text{ ps}]$, score = 20
- For $d \leq 200$ ps, score = 24
You can implement any type of adder you want. Your design can take as much area as needed.
There are “Optional” tests in Didit for each of the score cutoffs above. You do not need to pass all of them but you will only receive points for the ones that you pass.
We recommend you try implementing either of the two designs from Lecture 9, a carry-select adder or a carry-lookahead adder. Carry-select adders are easy to code, and a properly designed recursive carry-select adder will be fast enough to earn full credit. Carry-lookahead adders are more sophisticated and harder to get right but will be faster if properly implemented. This could be useful in Design Project at the end of the term when you are trying to optimize a processor for maximum performance. Any of the carry-lookahead adder variations should be fast enough to earn full credit.
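Before committing to hardware, you can prototype the carry-select recursion in software. The Python sketch below (ours; a behavioral model, not Minispec and not a complete solution) computes the upper half for both possible carries and selects using the lower half's carry-out:

```python
def carry_select_add(a: int, b: int, carry_in: int, n: int = 32):
    """Recursive carry-select addition; returns (n-bit sum, carry-out)."""
    mask = (1 << n) - 1
    a &= mask
    b &= mask
    if n == 1:  # base case: a single full adder
        return a ^ b ^ carry_in, (a & b) | (carry_in & (a ^ b))
    half = n // 2
    lo, lo_cout = carry_select_add(a, b, carry_in, half)
    # Speculatively compute the upper half for carry-in 0 and 1, then select.
    hi0, c0 = carry_select_add(a >> half, b >> half, 0, n - half)
    hi1, c1 = carry_select_add(a >> half, b >> half, 1, n - half)
    hi, cout = (hi1, c1) if lo_cout else (hi0, c0)
    return (hi << half) | lo, cout
```

For example, `carry_select_add(0xFFFFFFFF, 1, 0)` returns `(0, 1)`: the sum wraps and the carry-out is 1. In hardware, the two speculative upper halves run in parallel with the lower half, which is where the speedup comes from.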
Implement your fast adder in the following skeleton function in **ALU.ms**.
```
// N-bit fast adder
function Bit#(n) fastAdd#(Integer n)(Bit#(n) a, Bit#(n) b, Bit#(1) carryIn);
```
Test your design by running make fastAdd32 && ./Tb_fastAdd32
Synthesize your design by running synth ALU.ms "fastAdd#(32)" -l multisize
A Case for Dynamic Sets in Operating Systems
David Steere M. Satyanarayanan
November 30, 1994
CMU-CS-94-216
Carnegie Mellon
This document has been approved for public release and sale; its distribution is unlimited.
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
Recent trends have exposed three key problems in today's operating systems. The first is the emergence of I/O latency as the dominant factor in the performance of many applications. The second is the need to cope with mobile communication environments where bandwidth and latency may be highly variable. The third is the importance of search activity to locating files of interest in a distributed system. In this paper we describe a single unifying abstraction called dynamic sets which can offer substantial benefits in the solution of these problems. These benefits include greater opportunity in the I/O subsystem to aggressively exploit prefetching and parallelism, as well as support for associative naming to complement the hierarchical naming in typical file systems. This paper motivates dynamic sets, presents the design of a system that embodies this abstraction, and evaluates a prototype implementation of the system via measurements and an analytical model.
This research was supported by the Air Force Materiel Command (AFMC) and ARPA under contract number F196828-93-C-0193. Additional support was provided by the IBM Corporation, Digital Equipment Corporation, and Intel Corporation.
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of AFMC, ARPA, IBM, DEC, and Intel.
Keywords: Search, Browsing, Mobile Computing, Distributed Systems, File Systems, Prefetching, Performance, Latency, Modeling, Dynamic Sets
1. Introduction
In this paper we consider the problem of high latencies during search or browsing in a large scale distributed data repository such as AFS[13] or the World Wide Web (WWW)[1]. To solve the problem, we identify a new operating system abstraction called dynamic sets. The essence of the abstraction is the explicit grouping of sets of file accesses and the communication of this grouping by applications to the operating system. We demonstrate that this simple abstraction can have powerful performance implications across a spectrum of common scenarios.
Dynamic sets can be used to reduce the aggregate I/O latency of a group of related requests. For example, they can be used to cope with the increasing mismatch between CPU and disk speeds in high-performance computing systems. As another example, dynamic sets allow mobile clients to overlap processing of data with its access over low-bandwidth wireless links. As a third example, the explicit identification of associativity provided by dynamic sets can partially compensate for the lack of temporal locality in contexts such as search.
To obtain early validation of the potential benefits of dynamic sets, we have built a simple user-level prototype. The prototype exhibits significant performance improvements, although it is limited in scope. For example, on a suite of test programs modelling overlap of data processing with network transmission, dynamic sets reduce total elapsed time by almost a factor of four at 9600 baud. The benefits are, of course, highly dependent on the specific applications and system parameters, but the use of dynamic sets did not hurt performance in any of the cases we considered.
We have also developed an analytical performance model, and confirmed its accuracy by comparing model predictions with measurements of the prototype. The model is valuable in helping us understand the interplay of factors such as network latencies and the amount of client data processing in determining performance. It also enables us to better predict the behavior that a full implementation, currently under development, will exhibit.
We begin the paper with a more detailed rationale for dynamic sets. Next we describe the design for our realization of dynamic sets, then, in Section 4, we describe our prototype implementation. Section 5 derives a performance model for dynamic sets, establishes its validity, and discusses the efficacy of sets in several different arenas. We conclude with a description of work in progress, and a comparison of our work with other related research.
2. Motivation
To understand the value of dynamic sets, consider searching for data on a Unix-like file system. Although the file system interface was not explicitly designed for efficient search, it is frequently called upon to support many types of search. For instance, searching a collection of source files for a variable declaration is an everyday occurrence. More recently, the advent of browsing systems such as NCSA Mosaic enabled the construction of hypertext documents embodying numerous file system objects. In these and other similar examples, the caching embodied in most file system implementations is ineffective at masking I/O latencies — there is very little temporal locality to exploit.
Consider the execution of the simple command, `grep foo *.c`, in a typical Unix system. The wildcard `"*"` is expanded by the shell, and the application program `"grep"` is given a sequence of filenames matching the pattern `"*.c"`. Each file is successively opened, read in its entirety while searching for occurrences of `"foo"`, and then closed. Although the precise identity of files needed is determined once the wildcard expansion has been performed, this information cannot be exploited by the operating system to prefetch the files from a disk or over the network. Further, the order of file opens is fixed at wildcard expansion time although searching those files in a different order would still preserve the semantics of the command. This means that the operating system cannot reduce the overall elapsed time for the command by reordering requests to exploit differences in the I/O latencies for different files.
Of course, `grep` involves so little processing that there is little to be gained by overlapping processing with I/O. But the reduction in total latency due to reordering of file opens can still be significant if there is parallelism within the I/O subsystem. More importantly, the above example is only a representative of a common Unix programming idiom. One could envision a similar scenario using a query-by-image-content search program [8] or an interactive search
program accessing a collection of images stored as Unix files. In those cases, the processing or user think times will be large enough that overlapping them with file access will substantially reduce overall latency.
These examples reveal two distinct limitations of current Unix systems. First, knowledge of related file accesses is lost to lower levels of the system even when it is evident to higher levels of the system. Second, an ordering of such file accesses is imposed too early in their handling. Dynamic sets address both these limitations. The set abstraction allows applications to identify a related group of file accesses and allows this grouping to be exposed to lower levels of the system without imposing an unnecessary ordering.
A secondary benefit of dynamic sets is that they allow the superior scaling characteristics of hierarchical naming to be seamlessly combined with the convenient search capability of associative naming. For instance, suppose MIT and CMU maintain databases indexing their computer science technical reports stored in world-wide AFS. A user of dynamic sets might browse for a report on some topic by using grep to search through reports returned by a query run on the databases. In the syntax presented below, such a query would look like `grep "distributed systems" /afs/{mit,cmu}/tr-db/\select name from reports where author like "david"\`.
The databases used in the above example effectively serve as "navigational databases" to better focus search in a large subtree of files. A forerunner of this idea was embodied in the Semantic File System (SFS)[4]. WAIS, gopher, and various web spiders serve a similar purpose for the WWW. We will discuss the relationship between dynamic sets and these systems in Section 7.
3. Design of SETS
A proper realization of dynamic sets must possess the following characteristics. First, the set mechanism must be lightweight to minimize unnecessary overhead. Second, the semantics should be strong enough to satisfy application requirements, but not be overly restrictive on system design. Third, the set interface should be easy to use, while allowing applications to cleanly inform lower levels of future accesses. This section describes the design of an instantiation of dynamic sets on a Unix platform. For clarity, the realization will be called "SETS", while the abstraction will continue to be "sets".
A set is a dynamically created collection of objects. "Dynamic" means a set is not persistent: it need only exist as long as the application that created it. Preserving the results longer not only consumes resources (sets can be big), but adds no useful function to the system. We posit that searchers are interested in timely information; e.g., a user wishing to examine today's weather in a number of cities would not be satisfied if presented with the data that was obtained when he ran the search the previous week. If a user wishes to create a permanent copy of a set, he can do so using standard file system tools.
The mutability of sets and objects raises two important issues: "What is the proper definition of set membership?" and "How current do the members of the set need to be?". These questions have been answered in the context of distributed databases (e.g. read-only transaction[3]), but we feel that these solutions are not appropriate to the large scale distributed systems such as WWW or AFS[14]. For one, many types of data do not need such strong guarantees[2,5], and two, the target systems do not provide the mechanisms necessary for these strong guarantees.
In addition, long running queries (due to large, physically dispersed data repositories) are likely in a system that spans the Internet. Maintaining tight consistency is either too restrictive (e.g. limiting partitioned access) or imposes too high a performance penalty (e.g. distributed locking). We believe that although some applications require strong consistency, users of many other applications are willing to trade these guarantees for better performance. For this reason, SETS provides a much weaker promise, captured in these two assertions.
1. Every object in the set satisfied the query at some point during its run.
2. Once an object is in the set, it will remain in the set.
Although these may seem like weak promises, they strike a balance between the needs of scalability and availability, and the need to offer useful semantics. An earlier paper sketches the space of weak consistency in this context, and contains a more detailed discussion of these issues[15].
3.1. SETS Interface
The operations in the SETS interface are presented in Table 1. This subsection discusses the four most important operations: `setOpen`, `setIterate`, `setDigest`, and `setClose`. The remainder are standard set operations and will not be discussed further.
<table>
<thead>
<tr>
<th>Group</th>
<th>Returns</th>
<th>Signature</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="4">Basic Set Functions</td>
<td>setHandle</td>
<td>setOpen( char *setPathname );</td>
</tr>
<tr>
<td>errorCode</td>
<td>setClose( setHandle set );</td>
</tr>
<tr>
<td>fileDesc</td>
<td>setIterate( setHandle set, int flags );</td>
</tr>
<tr>
<td>errorCode</td>
<td>setDigest( setHandle set, char *buf, int count );</td>
</tr>
<tr>
<td rowspan="7">Auxiliary Set Functions</td>
<td>setHandle</td>
<td>setUnion( setHandle set1, setHandle set2, int flags );</td>
</tr>
<tr>
<td>setHandle</td>
<td>setIntersect( setHandle set1, setHandle set2, int flags );</td>
</tr>
<tr>
<td>errorCode</td>
<td>setRewindIterator( setHandle set );</td>
</tr>
<tr>
<td>errorCode</td>
<td>setRewindDigest( setHandle set );</td>
</tr>
<tr>
<td>int</td>
<td>setSize( setHandle set );</td>
</tr>
<tr>
<td>bool</td>
<td>setMember( setHandle set, char *elem );</td>
</tr>
<tr>
<td>errorCode</td>
<td>setApply( setHandle set, void (*f)() );</td>
</tr>
</tbody>
</table>
Table 1: SETS system call interface
Sets are created by calling `setOpen` with a `set pathname` (using syntax explained in Section 3.2), and receiving a handle for the open set in return. The system expands the set pathname to obtain a list of names of individual objects in the set. SETS is free to determine the aggressiveness with which this expansion should be performed. In particular, `setOpen` does not require that any expansion be completed before the call returns. The system may also fetch individual objects in the set at this time, but is not required to do so. `setClose` terminates use of a set handle, allowing the system to free any resources used by the corresponding set.
SETS provides two ways of examining the contents of an open set to allow both browsing (`setDigest`) and iteration (`setIterate`). `setDigest` assumes that a summary of the objects is sufficient to guide selection, and so can avoid the cost of obtaining the members' data. `setIterate` assumes that the data of all the objects is of interest, and fetches each in turn. The distinction between these operations is similar to the difference between running "ls" and using the names or attributes to select a file vs running "grep foo *.c" and choosing a file based on the results.
`setDigest` is very lightweight, presenting only summary information about the members. The nature of the summary is dependent on the type of the members. For Unix files, the name of the file is returned by `setDigest`, while an image object summary might be a lower resolution/thumbnail presentation of the image. A special value of `errorCode` is returned to indicate that the set has not yet been fully expanded. Upon seeing this condition, applications know that future calls to `setDigest` on this set may return additional values. This is valuable to interactive applications that cannot afford to wait for the set to be fully expanded.
`setIterate` is used when accessing the contents of individual members. Each call to the iterator returns a standard Unix file descriptor for a previously unprocessed member, so a member must be at least partially fetched before it can be returned by `setIterate`. But its use allows the system to more confidently allocate resources for prefetching the member, since it is a stronger hint of future access to that member than the use of `setDigest`.
The following pseudo code is typical of the way standard Unix applications such as `grep` would use SETS.
handle = setOpen(argv[1]);
while ( fd = setIterate(handle) ) {
    process(fd);
    close(fd);
}
setClose(handle);
The command is invoked with the names of sets to process. Each set is opened and the members are produced using setIterate. The routine process() performs the application specific function, in the case of grep reading the file sequentially and searching for a specific string. When processing is complete, the member is closed, and upon termination of the iterator, the set is closed. As one can see, it is quite possible to modify existing applications to use sets with little or no knowledge of the details of the application.
3.2. Naming
When a set is opened, the application supplies a pathname specifying the names of the objects in the set. SETS defines three types of set specifications, an example of each being given in Figure 1. First, in an explicit specification, a user enumerates the members of the set, either using full names or pattern matching. Second, type-specific specifications are special strings that can be evaluated by servers that export a particular type of data, such as a database or the WWW. Third, executable specifications are binaries that return a list of the names of files to be put in a set. This class of set specification raises many issues, such as security and heterogeneity, which are not addressed in this paper. It is important to note that SETS only provides naming support, and does not supply executable binaries nor the type-specific services (e.g. databases). Because many Unix users are already familiar with the *wildcard* notation, we have extended this notation to include all three forms of set specification.
As seen in Figure 1, a pathname in the extended syntax can have a set specification in any component. The portion of the pathname after a specification treats the set members as directories. For instance, /coda/{cmu,mit}/staff looks for a subdirectory of /coda/cmu and /coda/mit named staff. If no such directory exists, the resulting set is the empty-set. As another example, a type-specific query applied to an object of the wrong type will also return the empty-set. A normal Unix name is just a special case of a set name that refers to only one object.
<table>
<thead>
<tr>
<th>Explicit:</th>
<th>/coda/usr/dcs/<em>src</em>/*.c</th>
</tr>
</thead>
<tbody>
<tr>
<td>Type-specific:</td>
<td>/coda/{cmu,mit}/staff/\select home where name like "%david%"\</td>
</tr>
<tr>
<td>Executable:</td>
<td>/coda/sources/%myMakeDepend foo.c%</td>
</tr>
</tbody>
</table>
Because many Unix users are already familiar with the *wildcard* notation, we extended this syntax to support set specification. *Explicit specifications* use standard *wildcard* notation. *Type-specific specifications* have three portions: the prefix identifies the object on which the query will be run (such as a database), and is terminated by the first "\"; the query is delimited by "\" characters; and the suffix is appended to the query's results. *Executable specifications* are similar to type-specific ones; the prefix is the name of the working directory in which the binary will be run, and the "%" delimits the command used to invoke the binary.
Figure 1: Examples of the three types of names supported by SETS.
4. Prototype Implementation
To gain an understanding of the performance implications of SETS, we have built a prototype based on a simplistic distributed file system. The prototype is a library that exports the set abstraction to applications. The file system consists of a client library and a singly threaded user-level server which exports remote procedure calls (RPCs) to open, read, write, and close files local to the machine on which the server is running. The client parses pathnames of the form /<serv>/<local-file>, redirecting I/O requests on the file to the server on machine <serv>. No
caching is done by the client library, although a file may be preread after it has been opened depending on the current activity of the client and set library. Caching at the server is limited to Unix’s disk block buffer cache.
The prototype supports the {} notation, and SQL queries as shown in example 2 in Figure 1. Any component can contain a query, although meaningless queries (such as an SQL query run on the file system root) may return the empty-set. The experiments discussed in Section 5.2 use paths like /{serv1,serv2,serv3}/tmp/file{1,2,3,4}.
The application used in the experiments is a prototypical search program. It processes all the members of a set, sequentially reading data from a file and spending a parameterized amount of time processing each byte read. The application has one thread, but the number of threads in the SETS library is specified at run-time. As in a time-sharing environment, if the application uses the CPU for more than a predetermined limit (roughly 10 msec), it is interrupted and other threads are allowed to run. To prevent SETS from blocking the application’s forward progress, the SETS threads run at a lower priority than the client thread.
4.1. Limitations of the Prototype
Several implementation decisions were orthogonal to the design of SETS, yet had negative impact on the prototype’s performance. These solutions were chosen to ease the implementation, but they also added unnecessary inefficiencies which detracted from the benefit of using dynamic sets. The kernel implementation of SETS currently in progress should avoid these costs.
First is the use of off-the-shelf user-level thread and RPC packages [11], which introduce inefficiencies that restrict the achievable parallelism. The threads package provides non-preemptive coroutines, and so can only schedule a thread when the currently active thread explicitly yields. This means the entire application (and client library) is suspended when any thread makes a system call (such as a read from a socket). Additionally, the RPC package has higher client CPU overheads than a streamlined kernel implementation would. This loss of parallelism has significant impact when network latencies are low. Larger network response times tend to hide this effect.
Second, we chose to transfer data via arguments to RPCs, which limits our maximum transfer size to 3K. This means the prototype pays a substantial amount of overhead per byte. Although this effect is lower for a small file, it has a clear impact as the size of the file grows. Predictions of the performance of SETS under other conditions, such as whole file caching (as in Coda[12]) or large block transfer (as in AFS[6]), are presented in Figure 5.
Third, the prototype uses simplistic resource scheduling. For instance, contention for the CPU is managed by the thread package, and does not take contention for other resources (data, sockets) into account. Additionally, the prototype aggressively prefetches without regard to client activity. So a client demand read can be blocked by a preread for some other file on the same server. These prefetch policies will be one of the interesting issues explored in the kernel implementation of SETS.
5. Performance Model
This section presents a linear model of the aggregate latency to fetch a group of objects. The model focuses on the chief advantage of dynamic sets: the exploitation of available parallelism in common data storage systems. This parallelism takes two forms: overlapping CPU and I/O activity, and overlapping I/O through independent channels. Comparisons between the model predictions and experiments on the prototype are presented below and confirm the model to be accurate to within 10%.
To simplify the model, we assume that there is no contention between the acquiring of data and processing of other data, and that communication to independent servers will not interfere with each other. The advantage to this assumption is that it greatly simplifies our model; without it we would need to use a queuing network. The disadvantage is that such contention may reduce the benefit of using SETS, and by ignoring it the model may overestimate the benefit of
sets. However, the graphs below show that the model underestimates the benefit in almost every case, and given the relative accuracy of our predictions, the assumption seems reasonable.
In this section, we distinguish between SERIAL, the time to fetch and process a set of elements serially (as it is normally done on today's systems), and SETS, the time taken using dynamic sets. The predictions and experimental results are presented as ratios between SERIAL and SETS, both to bring focus to the key point of the study, as well as to reduce the impact of the prototype's inefficiencies on the results.
5.1. Derivation of Model
The model consists of a number of equations describing the various costs involved in fetching and processing the members of a set. The main parameters, listed below, allow the model to describe a wide variety of scenarios, some of which are discussed in Section 5.3. Other parameters used in the model (e.g. OpenCost) are implementation dependent, and are presented in the next section.
\[
\begin{align*}
\text{s} & \quad \text{The number of independent servers storing members of the set.} \\
\text{band} & \quad \text{The network throughput (bps). All routes to servers have the same bandwidth.} \\
\text{n} & \quad \text{The number of objects in the set. Members are evenly distributed across the servers.} \\
\text{size} & \quad \text{The size of the files. All the files in a set are the same size.} \\
\text{think} & \quad \text{The amount of processing/byte, either from the application or the user.}
\end{align*}
\]
In both the SERIAL and SETS cases, three kinds of work get done: opening a file (OpenCost), reading from a file (ReadCost), and processing data (think). The first two are RPCs; the basic cost of an RPC is described in equation (1). ServCost is the amount of processing on the server (different for OpenCost and ReadCost). Null is the round trip time for an RPC with no arguments and for which ServCost = 0. These two terms comprise the latency of an RPC. The last term is the transfer time, where Bytes is the amount of data transferred, band is a model parameter, and Slope is a constant factor containing the percentage of achievable bandwidth and various conversions (such as 8 bits/byte).
\[
\text{RPCCost} = \text{Null} + \text{ServCost} + \text{Slope} \times \frac{\text{Bytes}}{\text{band}}
\]
(1)
Equation (2) shows the amount of work done to process an open file. In the prototype, an open returns one buffer of data, and a read triggers a preread of the next buffer. Thus processing and reading of a file's data can be overlapped, saving the application the lesser of the two costs. nReads represents the number of RPCs required to read the file, and is a function of the size of the read buffer, how much data is read at open time, and the manner in which end-of-file is detected.
\[
\text{Work} = \text{max(size} \times \text{think, nReads} \times \text{ReadCost})
\]
(2)
The equation for SERIAL is fairly simple, the only overlap occurs from prereading the data in an open file. The cost for SETS is constrained as follows: before data can be processed it must be read; before a file can be processed, it must be opened. The read pipeline is started by the open fetching a buffer; the open pipeline is started by opening
the first member. However, every time an RPC is made, \( s - 1 \) other RPCs can happen in parallel. In particular, this means each RPC delay can be overlapped with the processing of the results of \( s \) previous RPCs. The last term in (5) represents the cost to drain the pipeline.
\[
\text{Serial} = n \times (\text{OpenCost} + \text{Work})
\]
(3)
\[
\text{end} = ((n - 1) \bmod s) + 1
\]
(4)
\[
\text{Sets} = \text{OpenCost} + \frac{n - s}{s} \times \max(s \times \text{Work}, \text{OpenCost}) + \text{end} \times \text{Work}
\]
(5)
The chief benefit of using dynamic sets is summarized in the second term of equation (5); effectively, \( s \) opens can be done at once. In addition, if the client is sufficiently CPU intensive (large think), network costs can be completely hidden behind client processing. We conjecture that in the future the amount of processing between demand fetches will continue to be high, due to both sophisticated applications (e.g. pattern matching, signal processing), and the reliance of flow control on human actions (such as “clicking” for the next element) by some applications. Both the model and the prototype show that these effects can be additive.
5.2. Validation of Model
The model was validated by comparing its predictions with the results of experiments on the prototype. Three experiments were run to show the accuracy of the model along different dimensions: varying the size of the set \( (n) \), the size of the files \( (size) \) in the set, and the bandwidth \( (band) \). The graphs in Figure 2 and Figure 3(a) present both the predicted (curves) and observed (points) results for the three sets of experiments, while the tables in the appendix present the means (and variance) of the observed results. A note of caution: in order to make details of the curves visible, the scale of the y-axis changes from graph to graph, although the scale of the x-axis remains the same.
The experiments were run on 5 Decstation 5000/200s (1 client/4 servers) connected by 10base2 Ethernet, running the Mach 2.6 operating system. The data files were uniformly distributed over the servers, and resided on the servers’ local disks. The data sets were small enough to fit into the servers’ buffer caches (with the exception of \( size = 100000 \)), and the experiments were run on warm server buffer caches. Each data point is the mean of 10 trials; no effort was made to restrict use of the network nor the machines during the experiments. Times were obtained via a hardware cycle counter which our measurements show to be accurate to 50 microseconds.
To obtain the RPC parameters used by the model in the previous section, we timed round trip delays to open a file and read one buffer; successive calls increased the size of the buffer from 0 bytes to 3000 bytes. The parameters obtained are: \( \text{Null} = 5.1 \) msec, \( \text{Slope} = 8221 \) (unitless), \( \text{ServCost} = 8.3 \) msec for opens, and \( \text{ServCost} = 6.4 \) msec for reads. With these numbers, our network delivers roughly 4 Mbps end-to-end.
Low bandwidth connections were emulated by a delay mechanism in the low levels of the RPC package. This mechanism multiplies the size of the packet and the desired bits per second to determine the time the packet should be delayed before being sent. Successive packets to the same machine will be additionally delayed behind earlier packets, although packets to different machines will not affect one another. Although this mechanism is an adequate predictor of actual networks[7], we suspect it added some error to our results. In particular, the delay mechanism uses the client CPU, increasing the contention for it, and reducing the amount of attainable parallelism. An effect of this contention can be seen in the discrepancy between the model and experimental results in Figure 3(a). For smaller values of \( band \), the amount of CPU required to delay packets is higher, and the observed discrepancy is larger. Unfortunately, it is too difficult to experimentally prove that the error for these curves is entirely due to the use of this mechanism.
Examining the graphs in Figures 2, and 3(a), one can see that the model closely predicts the benefit of sets over several dimensions. In fact, the average magnitude of the error is 9.5% \( (\sigma = 11.2) \). The high variance is due to three data
\(^1\)This stems from our assumption that sufficient network bandwidth is available; see Section 5.3.1
points, without which the average and variance would be 7.3% and 3.9 respectively. The three data points occur at $(\text{size} = 100000, \text{think} = 0), (\text{size} = 10000, \text{think} = 0),$ and $(n = 64, \text{think} = 0)$. The error in all three is largely due to the fact that our model uses a simplified prefetching strategy which is more restrictive than the prototype's. A more exact (but complicated) model presented in Appendix 1 significantly reduces the error of these three points, yielding average error magnitude of 8.7% ($\sigma = 5.6$).
Each curve can show potentially three distinct phases (as exemplified by the $\text{conn} = 64$ Kbps curve in Figure 3(a)). Initially the curve is flat, since processing time is completely subsumed by I/O costs. Next, a small benefit is gained by further overlapping processing with open costs. Finally, the effective benefit of sets drops off, asymptotically approaching 1.0 as processing costs become the dominant factor. For $\text{conn} = 9600$ bps, only the first phase is seen; the cost of processing never exceeds the cost of reading a buffer. For $\text{conn} = 2$ Mbps and higher, processing costs overwhelm open costs before they overwhelm read costs, thus skipping the second phase.
It should be noted that even in the worst cases, $\text{SETS}$ does provide a benefit (although the percentage benefit is small). The reduced benefit is due to the decreased contribution of I/O to overall performance; $\text{SETS}$ reduces I/O costs significantly in all the cases we examined. Thus even in the bad cases, if $\text{think}$ is human processing time, the savings from $\text{SETS}$ will be very noticeable because the latency is directly visible to the user.
5.3. Model Extensions
We now use the model to predict the benefit from using dynamic sets in several different situations. The first subsection extends the model slightly to predict the performance of dynamic sets when run on a mobile client connected to the network by a single low bandwidth link, such as a cellular modem. The following subsection then explores the impact that a whole-file or large-block caching strategy would have on the performance of dynamic sets.
5.3.1. Weakly connected mobile hosts
A common scenario in the future will be a client connected over a weak (low bandwidth or lossy) link to the Internet, and then over various Internet routes to servers (see Figure 4). For these clients, a significant portion of communication costs will be transmission over the weak link, and since all communication must use the link serially this may have drastic effects on the benefit of dynamic sets.
Our original model is extended to include this scenario by adding a parameter, $WeakBand$, which is the bandwidth of the weak link. Since the weak link is an independent component, the cost of using it can be considered separately from the other costs. Equation (6) defines $WeakLink$ to be the total time that data is being transferred over the weak link.
$$WeakLink = n \times size \times \frac{Slope}{WeakBand}$$ (6)
Equations (7) and (8) show the costs for $SERIAL$ and $SETS$ over the weak link. In the former case, the cost of the weak link is added to the other costs since the files are processed serially. In the latter case, the use of the client CPU can be overlapped with the use of the weak link, which in turn can be overlapped with use of the $s$ servers. Unlike the
Figure 4: Example of a weakly connected mobile host.
model in Section 5.1, the client starts the pipe by sending off the first message ($Startup$ is the client protocol overhead to send a message), drives the weak link at full utilization until the set is exhausted, and finishes up by processing the last buffer ($Finish$ is $think$ multiplied by the amount of data in the last buffer).
$$Serial = n \times (OpenCost + Work) + WeakLink$$ (7)

$$Sets = Startup + \max\left(WeakLink, \ \max\left(n \times Work, \ \frac{n}{s} \times OpenCost\right)\right) + Finish$$ (8)
The graph in Figure 3(b) shows the predictions of this model. The graph uses \( n = 40, size = 1000, s = 4, \) and \( band = 2 \text{ Mbps} \). The curves present the behavior for three different weak link bandwidths. When \( WeakBand = 2 \) Mbps, the behavior is similar to that presented in Section 5.2 because communication is a small part of the overall cost. In the other two cases, different behavior is predicted. Both have a rising cost, since the weak link contributes the largest part of the latency. Since SETS can effectively overlap both client and server processing with weak link transmission, it does not pay a cost for client processing, hence the positive slope. This is an important result, as we conjecture resource-poor mobile clients connected over weak radio or telephone links will be common in the future.
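Equations (6)–(8) can be evaluated directly. The sketch below uses the paper's parameter names; the unit handling (size in bytes, bandwidth in bits per second, hence the factor of 8 standing in for the slope term) and the example values are our assumptions, not measurements from the paper.

```python
# Sketch of the weak-link model of Section 5.3.1 (equations 6-8).

def weak_link_model(n, size, s, open_cost, think, startup, weak_band):
    work = size * think                          # per-file processing time
    weak_link = n * size * 8 / weak_band         # eq. (6): total link time
    finish = work                                # process the last buffer
    serial = n * (open_cost + work) + weak_link  # eq. (7)
    sets = (startup                              # eq. (8)
            + max(weak_link, max(n * work, n / s * open_cost))
            + finish)
    return serial, sets

serial, sets = weak_link_model(n=40, size=1000, s=4, open_cost=10.0,
                               think=0.001, startup=1.0, weak_band=2e6)
# With a fast weak link, the weak-link term is negligible and SETS
# wins by overlapping opens with processing.
```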
5.3.2. Impact of cache block size
The benefit from SETS is maximized when a balance is achieved between the work done to open set members and the work done to process them. To understand this effect, we used the model to predict performance as the buffer size is increased from 0 to 100KB. The low end corresponds to demand fetch of data, while the high end corresponds to whole-file prefetch. The graphs in Figure 5 present the benefit vs. the transfer buffer size, each graph using the same model parameters as the data points (\( think = 0, s = 4, n = 40 \)) and (\( think = 100, s = 4, n = 40 \)) from Figure 2(b). For brevity, the intermediate values of \( think \) are not shown. As expected, the benefit of sets is reduced as the influence of \( think \) increases and diminishes the influence of I/O on overall performance. (Note that the scale of the y-axis is not the same for the two graphs.)
Each curve shows that maximum performance is obtained when the amount of work to fetch data is split between the open and the read phases. For small files this is equivalent to whole-file transfer. For large files, however, the high cost of fetching the file at open can delay the application. This is an example of how the fetching policy of the system can greatly affect overall performance. We hope to explore these issues, and to develop policies that allow systems to correctly adapt their strategy to the current operating load.
These graphs show that the benefit of dynamic sets is partly dependent on the choice of prefetching strategy. The X-axis shows the maximum amount of data that can be transferred at a time, while the Y-axis shows the benefit of SETS. Lower X values correspond to demand fetching pages; higher values correspond to large chunk or whole file caching. As can be seen, maximum performance is obtained when the correct balance between fetching at open and prefetching data is achieved. In both graphs $s = 4$, $n = 40$, and $band = 4$ Mbps. Note that the scale of the y-axis is not the same for the two graphs.
Figure 5: Predicted effect of the size of the unit of transfer on the benefit of SETS
6. Work in Progress
The initial results are sufficiently promising that we are currently implementing a refined and more complete version of SETS in several versions of Unix (Mach 2.6, NetBSD, Linux) and integrating it with several types of storage systems (Coda, Informix databases, WWW). To achieve maximal performance, we have added the interface in Figure 1 to the system API, placing SETS inside the kernel in close proximity to the name resolution code. With this design choice, we should be able to achieve tight integration with multiple lower level file systems, allowing a set to span different systems.
At the time of this writing, a user of SETS can use $\texttt{csh}$ wildcard notation, Informix SQL queries, or a subset of WWW URLs to specify queries. URLs that name HTML documents are treated as queries by parsing the HTML document, identifying all links to other documents, and creating a set to hold those documents. With this, a user of SETS can use Lycos$^2$, the WebCrawler$^3$, or other WWW indexing tools to search for WWW documents, and enjoy the benefits of dynamic sets when examining the results.
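Treating an HTML page as a query amounts to collecting the documents its links point to as members of a set. A minimal sketch of that link extraction, using Python's standard `html.parser` (our illustration; the prototype's actual parser is not shown in the paper):

```python
from html.parser import HTMLParser

# Collect the href targets of anchor tags; these would become the
# members of the dynamic set built from an HTML "query" document.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

p = LinkCollector()
p.feed('<a href="a.html">A</a> <a href="b.html">B</a>')
# p.links now holds ["a.html", "b.html"]
```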
With this full implementation of SETS, we hope to explore many issues that were raised by the prototype. First, how well can dynamic sets be integrated with an existing distributed file system? How will caching and dynamic sets interact in practice? Second, how difficult will it be for an implementation to achieve substantial improvement via dynamic adaptability? Third, what techniques can be exploited to increase support for mobile, weakly connected clients? Finally, how will dynamic sets perform when exposed to a real user community? Many issues of the effects of
$^2$ \url{http://lycos.cs.cmu.edu/cgi-bin/pursuit}
$^3$ \url{http://www.biotech.washington.edu/WebCrawler/WebQuery.html}
competition for and consumption of resources can only be explored when a system is supporting real user workloads.
7. Related Work
The problem of locating information in distributed systems has received a lot of attention over the past decade. There are two aspects to this problem: locating the information that satisfies a query and fetching the information for the application. The former is the domain of Information Retrieval (Salton and McGill[10] is a good introductory text). One reason so many researchers have focused on this aspect of the problem is that until now, the cost to locate the information subsumed the cost to fetch it. The rise of wide area systems and mobile clients, however, changes this picture because of the high latency to access remote data. Our work focuses on the second aspect of search and assumes the existence of reasonable indexes, and so can leverage the results of others.
There are a number of recent systems that can benefit from our techniques. The most popular of these is the WWW[1], a hypertext document that spans the Internet. As mentioned in Section 6, we are developing a system that treats URLs as queries, treating the documents pointed to by links in an HTML file as members of the result set. We hope the benefit of dynamic sets can then be made apparent to users of the Web.
Another example is the Semantic File System (SFS)[4], which attempts to provide automatic indexing of information in a file system by creating an index at a server, and updating it as files are created or modified. The SFS extends the traditional Unix pathname mechanism to support conjunctive queries over a space of name-value attribute pairs. Dynamic sets has a very different focus: that of reducing the latency seen by a client. As such, the SFS does not attempt to address the primary issues focused on here. However, one could easily envision adding dynamic sets to SFS, thus merging the benefits of both systems.
As a method for prefetching, dynamic sets are similar in nature to Transparent Informed Prefetching (TIP)[9]. In fact, dynamic sets and TIP were both inspired by the same basic problem, and dynamic sets can be thought of as a very strong form of TIP-style hints. The use of sets allows the system to prefetch, lazily fetch, and/or reorder the fetching of objects in the set, whereas TIP hints are limited to prefetching. However, TIP hints can be used to specify how a file will (probably) be read (e.g. stride width), whereas dynamic sets only inform the system that an object will (probably) be accessed. TIP and SETS could easily be integrated; for instance one could envision using TIP to specify read access patterns while iterating on a set.
8. Conclusion
The current status of our work provides preliminary confirmation of the feasibility and potential benefits of implementing dynamic sets in a Unix environment. Although our prototype is limited in many ways, it is realistic enough to give us first-hand experience in the use of dynamic sets. Our understanding of the performance benefits of dynamic sets is confirmed by the validation of our model with respect to the prototype. The more refined implementation of SETS currently under way will allow us to realize the full potential of dynamic sets.
As discussed earlier in the paper, search in various forms is an increasingly important activity in computing systems. Current file system mechanisms perform poorly in the absence of temporal locality, which is typical of search scenarios. We believe that dynamic sets will substantially improve the performance and functionality of search in distributed environments, including ones involving mobile clients.
References
A Extending model to allow aggressive read-caching
As mentioned in Section 5.1, the model presented earlier does not capture the full advantage of SETS. Since it can open a file before the file is requested by the application, SETS can also start prereading the file if network bandwidth is available. Unfortunately, this adds substantial complexity to the model, and only reduces the error when think is low. There are two reasons why it does not help more as think grows. First, the client spends more time thinking, allowing more of the reading to be done in the simple prereading scheme of SERIAL, and thus reducing the benefit of SETS. Second, I/O is a smaller percentage of overall performance, thus reducing the potential benefit of sets.
The new model allows the set mechanism to preread an open file if a server is not being used. For any file $i$, $FreeTime_i$ is the amount of time the network has been unused since that file was opened. Equation (9) defines this to be the difference between the time to process the bytes in files 1 through $i$, and the time to open file $i$ and to open and fully read files 1 through $i-1$ at $s$ servers. The symbol $-$ denotes subtraction over the natural numbers, i.e., if network time exceeds processing time, $FreeTime_i$ is defined to be 0.
$$FreeTime_i = i \times \text{size} \times \text{think} - \frac{\text{OpenCost} + (i-1) \times (\text{OpenCost} + n \times \text{ReadCost})}{s}$$
(9)
The time to process a file ($Work_i$) is redefined to allow some of the read work to be performed during the free time. Equation (10) shows that some number of reads may have been done before the application first examines the file, and this reduces the number of reads that have to be done during the processing of the file.
$$Work_i = \max\left(size \times \text{think},\; n \times \text{ReadCost} - \left\lfloor \frac{FreeTime_i}{\text{ReadCost}} \right\rfloor \times \text{ReadCost}\right)$$
(10)
Equation (11), like equation (5) in the simpler model, has three terms: the cost to start the pipeline, the cost of a pass through the pipe times the number of passes, and the cost to drain the pipe (process the last $end$ members). Here we use summation instead of multiplication, since the cost of each term can change over time. Since SERIAL cannot read a file before it opens it, the equation for it remains unchanged from (3).
$$Sets = \text{OpenCost} + \sum_{i=0}^{\left[ \frac{n}{s} \right]} \max(\sum_{j=i \times s+1}^{(i+1) \times s} Work_j, \text{OpenCost}) + \sum_{j=1}^{end} Work_j$$
(11)
The overall effect of this change on the accuracy of the model is to lower the average error from 9.5% (with $\sigma = 11.2$) to 8.7% (with $\sigma = 5.6$). In particular, the error at the three most troublesome data points for the last model was substantially reduced (from 67% to at most 27%), and at only one data point was the error larger than 20%. We conjecture that the remaining error is the result of ignoring the contention; a queueing model would probably reduce the error further, but we feel the effort would not be commensurate with the greater accuracy.
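Equations (9) and (10) can be evaluated directly. In the sketch below, names follow the paper where possible; `reads` (the number of ReadCost-sized buffer reads needed per file) and the natural-number subtraction helper are our naming, not the paper's.

```python
import math

# Sketch of the refined prefetching model of Appendix A (eqs. 9 and 10).

def monus(a, b):
    """Subtraction over the naturals: max(a - b, 0)."""
    return max(a - b, 0.0)

def free_time(i, size, think, open_cost, read_cost, reads, s):
    # eq. (9): processing time accumulated before file i, minus the time
    # to open file i and to open and fully read files 1..i-1 at s servers.
    return monus(i * size * think,
                 (open_cost + (i - 1) * (open_cost + reads * read_cost)) / s)

def work(i, size, think, open_cost, read_cost, reads, s):
    # eq. (10): buffer reads already performed during the free time need
    # not be repeated while the file is being processed.
    prefetched = math.floor(
        free_time(i, size, think, open_cost, read_cost, reads, s) / read_cost)
    return max(size * think, max(reads - prefetched, 0) * read_cost)
```

With `think = 0` there is no free time, so all reads are paid during processing; with a large `think`, earlier free time prefetches the reads away.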
B Experimental Data
Table 2, Table 3, and Table 4 present the raw data corresponding to the data points in the graphs in Figure 3(a), Figure 2(a), and Figure 2(b). Three experiments are presented: the first shows the effect of bandwidth, the second shows the effect of set size, and the third shows the effect of file sizes on the cost of processing a set of objects. Each number is the mean of 10 trials (with standard deviations in parentheses). All trials were run on warm caches. The numbers for SERIAL represent the elapsed time to fetch and process a set of elements serially, as would be done today on a typical Unix system. The numbers for SETS are the elapsed time to fetch the same set of objects using dynamic sets.
The experiments were run on the prototype described in Section 4 using 5 DECstation 5000/200s (1 client/4 servers) connected by 10base2 Ethernet, running the Mach 2.6 operating system. The data files were uniformly distributed over the servers, and resided on the servers' local disks. The data sets were small enough to fit into the servers' buffer caches (with the exception of size = 100000). To reflect the costs that real users would see, no effort was made to restrict use of the network nor the machines during the experiments. Times were obtained via a hardware cycle counter which our measurements show to be accurate to 50 microseconds.
<table>
<thead>
<tr>
<th>band</th>
<th></th>
<th colspan="5">think</th>
</tr>
<tr>
<th></th>
<th></th>
<th>0</th>
<th>25</th>
<th>50</th>
<th>75</th>
<th>100</th>
</tr>
</thead>
<tbody>
<tr>
<td>9600</td>
<td>SERIAL</td>
<td>44920.3 (37.3)</td>
<td>45773.4 (129.5)</td>
<td>46887.5 (85.5)</td>
<td>48175.0 (252.1)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>11726.6 (279.1)</td>
<td>11954.7 (214.0)</td>
<td>12114.1 (84.7)</td>
<td>12706.2 (324.1)</td>
<td></td>
</tr>
<tr>
<td>64000</td>
<td>SERIAL</td>
<td>6281.3 (19.8)</td>
<td>7490.6 (10.4)</td>
<td>8473.4 (85.3)</td>
<td>9428.1 (92.2)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>1523.4 (22.4)</td>
<td>1759.4 (66.4)</td>
<td>2595.3 (23.7)</td>
<td>3585.9 (29.9)</td>
<td></td>
</tr>
<tr>
<td>200000</td>
<td>SERIAL</td>
<td>1100.0 (42.6)</td>
<td>2123.4 (13.0)</td>
<td>3143.8 (46.8)</td>
<td>4154.7 (37.9)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>414.1 (16.0)</td>
<td>1425.0 (6.3)</td>
<td>2460.9 (51.5)</td>
<td>3493.8 (67.5)</td>
<td></td>
</tr>
</tbody>
</table>
Table 2: Average Elapsed time to process a set for 3 values of band.
Table 2 shows how the increasing cost of network communication affects the cost to process a set of objects. Each number shows the cost to fetch and process a set of 40 objects stored on 4 servers, where each object is 1000 bytes long. The numbers were obtained using a slow-network emulator which has been shown to be an adequate predictor of actual networks[7].
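The "benefit" plotted in the graphs can be computed from these numbers; that it is the SERIAL-to-SETS elapsed-time ratio is our reading of Section 5.2 (where the benefit "asymptotically approaches 1.0"). Using the think = 0 column of Table 2:

```python
# SERIAL and SETS elapsed times (think = 0 column of Table 2), by band.
serial = {9600: 44920.3, 64000: 6281.3, 200000: 1100.0}
sets = {9600: 11726.6, 64000: 1523.4, 200000: 414.1}

# Benefit = SERIAL / SETS elapsed time (assumed definition, see lead-in).
benefit = {band: serial[band] / sets[band] for band in serial}
# e.g. benefit[9600] is roughly 3.8
```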
<table>
<thead>
<tr>
<th>n</th>
<th></th>
<th colspan="5">think</th>
</tr>
<tr>
<th></th>
<th></th>
<th>0</th>
<th>25</th>
<th>50</th>
<th>75</th>
<th>100</th>
</tr>
</thead>
<tbody>
<tr>
<td>4</td>
<td>SERIAL</td>
<td>111.0 (4.7)</td>
<td>221.9 (6.3)</td>
<td>332.8 (10.0)</td>
<td>426.6 (7.2)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>59.4 (6.5)</td>
<td>157.8 (4.7)</td>
<td>262.5 (6.3)</td>
<td>360.9 (4.7)</td>
<td></td>
</tr>
<tr>
<td>16</td>
<td>SERIAL</td>
<td>460.9 (12.6)</td>
<td>860.9 (13.0)</td>
<td>1275.0 (26.3)</td>
<td>1665.6 (15.9)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>167.2 (7.2)</td>
<td>587.5 (31.4)</td>
<td>984.4 (9.9)</td>
<td>1390.6 (12.1)</td>
<td></td>
</tr>
<tr>
<td>64</td>
<td>SERIAL</td>
<td>2196.9 (665.2)</td>
<td>3562.5 (29.7)</td>
<td>5151.6 (35.0)</td>
<td>6857.8 (110.1)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>679.7 (17.5)</td>
<td>2320.3 (18.8)</td>
<td>3960.9 (51.5)</td>
<td>5607.8 (74.4)</td>
<td></td>
</tr>
<tr>
<td>232</td>
<td>SERIAL</td>
<td>9120.3 (236.3)</td>
<td>14728.2 (172.1)</td>
<td>21120.4 (794.2)</td>
<td>26681.3 (298.4)</td>
<td></td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>3260.9 (573.3)</td>
<td>9206.3 (257.3)</td>
<td>15090.7 (96.0)</td>
<td>21131.3 (237.4)</td>
<td></td>
</tr>
</tbody>
</table>
Table 3: Average elapsed time to process a set, for increasing set size
Table 3 shows how the number of objects in the set affects the cost to process a set of objects. Each number shows the cost to fetch and process a set of 1000 byte objects stored on 4 servers, across an Ethernet (ours can effectively deliver data at 4 Mbps). Note how the increase in think decreases the relative benefit of sets. This is due to the increasing importance of processing time to total elapsed time; dynamic sets can only reduce I/O costs.
Table 4 shows how the size of the objects in the set affects the cost to process a set of objects. Each number shows the cost to fetch and process a set of 40 objects stored on 4 servers across an Ethernet (ours can effectively deliver data
<table>
<thead>
<tr>
<th>size</th>
<th></th>
<th colspan="5">think time (msec/byte)</th>
</tr>
<tr>
<th></th>
<th></th>
<th>0</th>
<th>25</th>
<th>50</th>
<th>75</th>
<th>100</th>
</tr>
</thead>
<tbody>
<tr>
<td>1000</td>
<td>SERIAL</td>
<td>1315.6 (62.8)</td>
<td>2396.9 (96.9)</td>
<td>3342.2 (43.3)</td>
<td>4386.0 (124.4)</td>
<td>5961.0 (1109.7)</td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>489.1 (7.2)</td>
<td>1510.9 (25.2)</td>
<td>2515.6 (23.2)</td>
<td>3548.5 (35.3)</td>
<td>4632.8 (179.4)</td>
</tr>
<tr>
<td>10000</td>
<td>SERIAL</td>
<td>3490.6 (33.7)</td>
<td>12845.4 (366.4)</td>
<td>23100.1 (343.5)</td>
<td>33128.2 (556.5)</td>
<td>43209.5 (305.4)</td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>1542.2 (7.2)</td>
<td>11637.5 (248.2)</td>
<td>21847.0 (271.3)</td>
<td>31857.9 (309.2)</td>
<td>42012.6 (468.8)</td>
</tr>
<tr>
<td>100000</td>
<td>SERIAL</td>
<td>32348.5 (877.5)</td>
<td>121810.8 (1370.8)</td>
<td>228832.3 (3836.0)</td>
<td>323741.1 (3260.5)</td>
<td>431325.3 (8410.4)</td>
</tr>
<tr>
<td></td>
<td>SETS</td>
<td>18665.7 (152.0)</td>
<td>119126.3 (1649.0)</td>
<td>225519.1 (8904.0)</td>
<td>323006.5 (4891.8)</td>
<td>425243.8 (6351.4)</td>
</tr>
</tbody>
</table>
Table 4: Average elapsed time to process a set, for increasing member size
at 4 Mbps). The effective benefit of SETS was limited by the prototype’s transfer buffer size of 4KB. Section 5.3.2 argues that this implementation choice can substantially affect the achievable benefit from using dynamic sets.
The following full text is a preprint version which may differ from the publisher's version.
For additional information about this publication click this link.
http://hdl.handle.net/2066/199857
Please be advised that this information was generated on 2019-07-30 and may be subject to change.
AUTOMATIC TERMINATION ANALYSIS USING WANDA
CYNTHIA KOP
Department of Software Science, Radboud University Nijmegen
e-mail address: C.Kop@cs.ru.nl
Abstract. WANDA is a fully automatic termination analysis tool for higher-order term rewriting. In this paper, we will discuss WANDA’s underlying methodology. Most pertinently, this includes a higher-order dependency pair framework, weakly monotonic algebras through higher-order polynomials, and a variation of the higher-order recursive path ordering. All techniques are employed automatically using SAT encodings.
1. Introduction
Termination of term rewriting systems has been an area of active research for several decades. In recent years the field of automatically proving termination has flourished, and several strong provers have been developed that compete against each other in the annual International Termination Competition [27].
Compared to the core area of first-order term rewriting, higher-order term rewriting provides some unique challenges, for example due to bound variables. Nevertheless, the higher-order category of the termination competition has seen the participation of a range of tools (HOT [2], THOR [6], WANDA), each using different techniques.
WANDA, a tool structured primarily around higher-order dependency pairs, has been the leading tool in this category since 2013. WANDA was also the tool of choice as a termination back-end in the higher-order category of the International Confluence Competition [7], with both participating tools in 2016 (ACPH [25] and CSI’ho [23]) delegating termination questions to WANDA.
In this paper we will discuss the most important techniques used in WANDA. To this end we follow roughly the structure of an analysis by WANDA: first a higher-order TRS is read and (if necessary) translated into WANDA’s internal formalism, AFSMs (§2); then basic techniques for non-termination (§3) and for simple termination proofs using reduction pairs (§4) are applied. Finally, responsibility is passed to the dependency pair framework (§5).
2. Higher-order term rewriting using AFMSs
Unlike first-order term rewriting, there is no single, unified approach to higher-order term rewriting, but rather a number of similar but not fully compatible systems aiming to combine term rewriting and typed λ-calculi. To support (non-)termination proofs in several popular formalisms at once, WANDA uses her own internal format, Algebraic Functional Systems with Meta-variables. AFSMs are essentially simply-typed CRSs [16] and also largely correspond to the formalism in [4]; they are fully explained in [19, Ch. 2] and in [10]. We here present an overview which assumes familiarity with term rewriting and the simply-typed lambda-calculus.
Terms are built from a set of simply-typed variables $\mathcal{V}$ and a set $\mathcal{F}$ of simply-typed function symbols, using abstraction and application to form well-typed expressions. Meta-terms are built from a set of simply-typed variables $\mathcal{V}$ and a set $\mathcal{M}$ of meta-variables, each equipped with a type declaration $[\sigma_1 \times \cdots \times \sigma_k] \rightarrow \tau$, using abstraction, application and meta-variable application $Z[s_1, \ldots, s_k]$ (where $Z : [\sigma_1 \times \cdots \times \sigma_k] \rightarrow \tau \in \mathcal{M}$ and each $s_i : \sigma_i$). Meta-variables are not used as $\lambda$-binders. We denote $FMV(s)$ for the set of meta-variables occurring in $s$.
A substitution $\gamma$ is a partial function mapping variables $x : \sigma$ to terms of type $\sigma$ and meta-variables $Z : [\sigma_1 \times \cdots \times \sigma_k] \rightarrow \tau$ to terms $\lambda x_1 \cdots x_k.t$ of type $\sigma_1 \rightarrow \cdots \rightarrow \sigma_k \rightarrow \tau$. For a meta-term $s$ whose meta-variables all occur in the domain of $\gamma$, we let $s\gamma$ denote $s$ with occurrences of variables $x$ in the domain of $\gamma$ replaced by $\gamma(x)$, and $Z[s_1, \ldots, s_k]$ by $t[x_1 := s_1\gamma, \ldots, x_k := s_k\gamma]$ if $\gamma(Z) = \lambda x_1 \cdots x_k.t$; here, $[x_1 := s_1\gamma, \ldots, x_k := s_k\gamma]$ is the substitution mapping each $x_i$ to $s_i\gamma$.
Rules are pairs $\ell \Rightarrow r$ of meta-terms of the same type, such that $FMV(r) \subseteq FMV(\ell)$; both sides are closed (all their variable occurrences are bound) and $\ell$ is a pattern: for all sub-meta-terms of $\ell$ which have the form $Z[s_1, \ldots, s_k]$ $t_1 \cdots t_n$ necessarily $n = 0$ and $s_1, \ldots, s_k$ are distinct variables. A set of rules $\mathcal{R}$ defines a rewrite relation $\Rightarrow_{\mathcal{R}}$ as the smallest monotonic relation on terms that contains all pairs $(\ell\gamma, r\gamma)$ with $\ell \Rightarrow r \in \mathcal{R}$ and $\gamma$ a substitution on domain $FMV(\ell)$.
Meta-variables are used in early forms of higher-order rewriting such as Aczel’s Contraction Schemes [1] and Klop’s Combinatory Reduction Systems [16]. They strike a balance between matching modulo $\beta$-reduction and syntactic matching. Essentially, applying a substitution on a meta-term can be seen as computing a $\beta$-development. For example $d \ (\lambda x. \sin (Z[x]))[Z := \lambda y. \text{plus} \ y \ x]$ evaluates to $d \ (\lambda z. \sin (\text{plus} \ z \ x))$, and $Z[0, \text{nil}][Z := \lambda xy. \text{plus} \ x \ (\text{len} \ y)]$ to $\text{plus} \ 0 \ (\text{len} \ \text{nil})$.
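The substitution semantics above can be illustrated with a toy interpreter. This is our illustration, not WANDA's code; terms are nested tuples, abstraction inside terms is not modeled, and only what the example needs is implemented.

```python
# Meta-variable substitution as a beta-development: Z[s1,...,sk] with
# gamma(Z) = \x1...xk. t evaluates to t[x1 := s1, ..., xk := sk].

def subst_meta(term, gamma):
    kind = term[0]
    if kind == "meta":                      # ("meta", Z, [args])
        _, z, args = term
        params, body = gamma[z]             # gamma(Z) = (params, body)
        env = {p: subst_meta(a, gamma) for p, a in zip(params, args)}
        return subst_vars(body, env)
    if kind == "app":                       # ("app", f, [args])
        _, f, args = term
        return ("app", f, [subst_meta(a, gamma) for a in args])
    return term                             # ("var", x)

def subst_vars(term, env):
    kind = term[0]
    if kind == "var":
        return env.get(term[1], term)
    if kind == "app":
        _, f, args = term
        return ("app", f, [subst_vars(a, env) for a in args])
    return term

# Z[0, nil] with gamma(Z) = \x y. plus x (len y), as in the text.
gamma = {"Z": (["x", "y"],
               ("app", "plus", [("var", "x"),
                                ("app", "len", [("var", "y")])]))}
t = subst_meta(("meta", "Z", [("app", "0", []), ("app", "nil", [])]), gamma)
# t represents plus 0 (len nil)
```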
Example 1. Let $\mathcal{F} \supseteq \{0 : \text{nat}, \ s : \text{nat} \rightarrow \text{nat}, \ \text{nil} : \text{list}, \ \text{cons} : \text{nat} \rightarrow \text{list} \rightarrow \text{list}, \ \text{map} : (\text{nat} \rightarrow \text{nat}) \rightarrow \text{list} \rightarrow \text{list} \}$ and consider the following rules:
\[
\begin{align*}
\text{map} \ (\lambda x. Z[x]) \ \text{nil} & \Rightarrow \ \text{nil} \\
\text{map} \ (\lambda x. Z[x]) \ (\text{cons} \ H \ T) & \Rightarrow \ \text{cons} \ Z[H] \ (\text{map} \ (\lambda x. Z[x]) \ T)
\end{align*}
\]
Then $\text{map} \ (\lambda x. 0) \ (\text{cons} \ 0 \ (\text{cons} \ (s \ 0) \ \text{nil})) \Rightarrow_{\mathcal{R}} \text{cons} \ 0 \ (\text{map} \ (\lambda x. 0) \ (\text{cons} \ (s \ 0) \ \text{nil})) \Rightarrow_{\mathcal{R}} \text{cons} \ 0 \ (\text{cons} \ 0 \ (\text{map} \ (\lambda x. 0) \ \text{nil})) \Rightarrow_{\mathcal{R}} \text{cons} \ 0 \ (\text{cons} \ 0 \ \text{nil})$. Note that the bound variable $x$ does not need to occur in the body of $\lambda x. 0$ to match $\lambda x. Z[x]$. It is allowed to occur, though: $\text{map} \ (\lambda x. s \ x) \ (\text{cons} \ 0 \ (\text{cons} \ (s \ 0) \ \text{nil}))$ reduces in three steps to $\text{cons} \ (s \ 0) \ (\text{cons} \ (s \ (s \ 0)) \ \text{nil})$.
Example 2. In Example 1, a term $\text{map} \ s \ (\text{cons} \ 0 \ \text{nil})$ cannot be reduced, because $s$ does not instantiate $\lambda x. Z[x]$. We could alternatively consider the rules:
\[
\begin{align*}
\text{map} \ Z \ \text{nil} & \Rightarrow \ \text{nil} \\
\text{map} \ Z \ (\text{cons} \ H \ T) & \Rightarrow \ \text{cons} \ (Z \ H) \ (\text{map} \ Z \ T)
\end{align*}
\]
Here, $Z$ has a type declaration $[] \rightarrow \text{nat} \rightarrow \text{nat}$ instead of $[\text{nat}] \rightarrow \text{nat}$, and the second rule employs explicit application. Then $\text{map} \ s \ (\text{cons} \ 0 \ \text{nil}) \Rightarrow_{\mathcal{R}} \text{cons} \ (s \ 0) \ (\text{map} \ s \ \text{nil})$. However, we may need explicit $\beta$-reductions; e.g., $\text{map} \ (\lambda x. s \ x) \ (\text{cons} \ 0 \ \text{nil}) \Rightarrow_{\mathcal{R}} \text{cons} \ ((\lambda x. s \ x) \ 0) \ (\text{map} \ (\lambda x. s \ x) \ \text{nil}) \Rightarrow_{\beta} \text{cons} \ (s \ 0) \ (\text{map} \ (\lambda x. s \ x) \ \text{nil})$.
Following [19, §2.3.1] and [18, §7], uncurrying does not affect termination provided the rules are (essentially) unchanged. That is, let \textit{arity}(f) denote the largest number \(k\) such that (1) \(f\) can be applied to at least \(k\) arguments, and (2) every occurrence of \(f\) in \(\mathcal{R}\) is applied to at least \(k\) arguments. Then to prove termination it suffices to show that there is no infinite reduction \(s_1 \Rightarrow_{\mathcal{R}} s_2 \Rightarrow_{\mathcal{R}} \ldots\) where, in every term \(s_i\), each symbol \(f\) always occurs with at least \(\textit{arity}(f)\) arguments. In Example 1, \(\textit{arity}(s) = 1\) and \(\textit{arity}(\text{cons}) = \textit{arity}(\text{map}) = 2\); thus, we do not need to consider terms such as \(\text{map} \ s \ (\text{cons} \ 0 \ \text{nil})\) or \(\text{map} \ (\lambda x. s) \ x\) for termination. Arities are essential in various techniques (see, e.g., §4).
**Input to WANDA.** WANDA is written for Linux, and can be invoked using:

\texttt{./wanda.exe <filename>}
Filenames describing an AFSM should have extension `.afsm` and list first all function symbols with their types (each on an individual line) and then the rules, after an empty line as separator. Arities and types of meta-variables do not need to be given, as these are automatically derived. Types of bound variables may be given in the binder, but can typically also be derived from context.
**Example 3.** Example 1 can be described to WANDA as follows.
\[
\text{nil} : \text{list} \\
\text{cons} : \text{nat} \rightarrow \text{list} \rightarrow \text{list} \\
\text{map} : (\text{nat} \rightarrow \text{nat}) \rightarrow \text{list} \rightarrow \text{list}
\]
\[
\text{map} \ (\lambda x{:}\text{nat}.\ Z[x]) \ \text{nil} \Rightarrow \text{nil} \\
\text{map} \ (\lambda x{:}\text{nat}.\ Z[x]) \ (\text{cons}\ H\ T) \Rightarrow \text{cons}\ Z[H]\ (\text{map}\ (\lambda x{:}\text{nat}.\ Z[x])\ T)
\]
WANDA automatically derives arity 2 for both \text{cons} and \text{map}. Removing the :nat part in the rules does not affect the analysis, as this typing is clear from context.
The formalism used in the termination competition [27] – “higher-order rewriting union beta”, which I will typically refer to as Algebraic Functional Systems (AFSs) – is very similar to AFSMs, but uses variables for matching rather than meta-variables; this gives rules like those in Example 2. WANDA can read such systems either in the competition’s .xml format or in her own human-readable presentation of this same format (.afs), and translates them to AFSMs by applying the transformations of [18] and then replacing all free variables \(x : \sigma\) in rules by a corresponding meta-variable \(X : [] \rightarrow \sigma\).
The use of meta-variables for matching allows for the representation of rules such as \(d \ (\lambda x. \sin \ Z[x]) \Rightarrow \lambda x. (d \ (\lambda y. Z[y]) \ x) \times (\cos \ Z[x])\), which have no counterpart in AFSs. This makes it possible to encode \textit{pattern HRSs} [22, 24] and also \textit{CRSs} [16] into AFSMs. It is future work to include such translations directly into WANDA.
Thus, the first step of a termination analysis in WANDA is to read the input: either an AFSM whose types are derived, or an AFS which is simplified and translated to the AFSM formalism. The second step is to derive arities, which are saved for later usage. When this is done, control is passed to the non-termination module.
3. **Non-termination**
Although the focus in her development has been on termination, WANDA has a basic checker with two tests to quickly identify simple cases of non-termination.
Obvious loops. An AFSM is clearly non-terminating if there is a reduction \( s \Rightarrow_{\mathcal{R}}^{*} t \) such that an instance \( s\gamma \) of \( s \) occurs as a subterm of \( t \). To discover such loops, WANDA takes a rule left-hand side, replaces meta-variable applications \( Z[x_1, \ldots, x_k] \) by variable applications \( y \ x_1 \cdots x_k \), and performs a breadth-first search on reducts, not going beyond the first thousand reducts. This simple method will not find any sophisticated counterexamples for termination, but it is quick and easy, and often catches mistakes in a recursive call.
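This loop search can be sketched for plain first-order rules as follows. The term encoding, rule format, and helper names here are my own; real WANDA additionally handles binders and meta-variables, and rewrites modulo beta:

```python
from collections import deque

# Terms: a variable is a plain string; an application of a function
# symbol f to arguments is a tuple ('f', arg1, ..., argn).

def match(pattern, term, subst):
    """Try to extend subst so that pattern instantiated by subst equals term."""
    if isinstance(pattern, str):              # pattern is a variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        s = dict(subst); s[pattern] = term
        return s
    if isinstance(term, str) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def instantiate(term, subst):
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0],) + tuple(instantiate(a, subst) for a in term[1:])

def one_step_reducts(term, rules):
    """All terms reachable in one rewrite step, at any position."""
    out = []
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            out.append(instantiate(rhs, s))
    if not isinstance(term, str):
        for i in range(1, len(term)):
            for r in one_step_reducts(term[i], rules):
                out.append(term[:i] + (r,) + term[i + 1:])
    return out

def contains_instance(term, pattern):
    if match(pattern, term, {}) is not None:
        return True
    return not isinstance(term, str) and \
        any(contains_instance(a, pattern) for a in term[1:])

def has_obvious_loop(rules, limit=1000):
    """Breadth-first search from each left-hand side, capped at `limit` reducts."""
    for lhs, _ in rules:
        seen, queue = 0, deque([lhs])
        while queue and seen < limit:
            t = queue.popleft(); seen += 1
            for r in one_step_reducts(t, rules):
                if contains_instance(r, lhs):
                    return True
                queue.append(r)
    return False
```

For instance, the rule f(x) ⇒ g(f(x)) is reported as looping, while f(s(x)) ⇒ f(x) is not.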
Instances of \( \omega \ \omega \). Non-termination of the untyped \( \lambda \)-calculus may be demonstrated with the self-reducing term \( \omega \ \omega \), where \( \omega = \lambda x. x \ x \). A higher-order variation of this example is given by rules such as \( f \ (c \ Z) \ X \Rightarrow Z \ X \) with \( c : (\sigma \rightarrow \sigma) \rightarrow \sigma \), where a function \( (Z) \) is taken out of a lower-order context: here, if we let \( \omega := c \ (\lambda x. f \ x \ x) \), we have \( f \ \omega \ \omega \Rightarrow_{\mathcal{R}} (\lambda x. f \ x \ x) \ \omega \Rightarrow_{\beta} f \ \omega \ \omega \).
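The classical self-reduction can be checked mechanically; here is a tiny sketch (my own term encoding, not part of WANDA) showing that \( \omega \ \omega \) β-reduces to itself in one root step:

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(t, x, v):
    """Capture-free here because v (omega) is closed."""
    kind = t[0]
    if kind == 'var':
        return v if t[1] == x else t
    if kind == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def beta_root(t):
    """One beta step at the root, if possible; None otherwise."""
    if t[0] == 'app' and t[1][0] == 'lam':
        return subst(t[1][2], t[1][1], t[2])
    return None

omega = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
term = ('app', omega, omega)
assert beta_root(term) == term   # ω ω reduces to itself: non-termination
```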
We can generalise the idea as follows. A context is a meta-term \( C[\square_1, \ldots, \square_n] \) containing \( n \) typed holes \( \square_i \), and \( C[s_1, \ldots, s_n] \) denotes the same meta-term with each \( \square_i \) replaced by \( s_i \). WANDA identifies rules \( \ell \Rightarrow r \) where \( \ell \) has the form \( C[D[Z], X] \) such that: (1) \( Z : [] \rightarrow \sigma_1 \rightarrow \ldots \rightarrow \sigma_k \rightarrow \tau \in \mathcal{M} \), with \( \tau \) the type of \( \ell \); (2) there is \( i \) such that \( D[Z] \) has type \( \sigma_i \), and \( r \) can be written as \( E[Z \ X_1 \cdots X_i \cdots X_k] \) with \( X_i : [] \rightarrow \sigma_i \in \mathcal{M} \); and (3) \( X \) and \( Z \) do not appear at other positions in \( C \) or \( D \). When such a rule is detected, WANDA uses it to construct a non-terminating term. She also looks for certain variations of this shape which consider meta-variables with more than 0 arguments.
Aside from these checks, WANDA employs a first-order termination tool to detect loops in the first-order rules of the AFSM, as part of the DP framework (§5).
4. Rule removal
WANDA’s first step for proving termination is rule removal. The basic principle is simple: if we can identify a well-founded term ordering \( \succ \) with a compatible quasi-ordering \( \succeq \) such that \( s \succeq t \) whenever \( s \Rightarrow_{\mathcal{R}} t \), and \( s \succ t \) when a certain rule is used, then that rule cannot be an integral part of an infinite reduction, so it can safely be removed – making the termination problem simpler. Rule removal is not necessary within WANDA (disabling it does not lose any benchmarks), but it often leads to shorter runtimes and simpler proofs.
In practice, we do not use a single well-founded ordering but a reduction pair:
Definition 4. A reduction pair is a pair \((\succeq, \succ)\) of a quasi-ordering and a well-founded ordering on meta-terms of the same type, such that:
- \( \succeq \) and \( \succ \) are compatible: \( \succ \cdot \succeq \) is included in \( \succ \);
- \( \succeq \) and \( \succ \) are meta-stable: if \( s \succeq t \) and \( \gamma \) is a substitution on domain \( \mathrm{FMV}(s) \cup \mathrm{FMV}(t) \), then \( s\gamma \succeq t\gamma \) (and similarly for \( \succ \));
- \( \succeq \) is monotonic: if \( s \succeq t \), then \( s \ u \succeq t \ u \), \( u \ s \succeq u \ t \) and \( \lambda x.s \succeq \lambda x.t \);
- \( \succeq \) contains beta: \( (\lambda x.s) \ t \succeq s[x := t] \) if \( s \) and \( t \) are terms.
A reduction pair is strongly monotonic if moreover \( \succ \) is monotonic.
Reduction pairs also play a large role in the dependency pair framework (§5); there, strong monotonicity is not required. However, depending on the query there may be additional requirements, such as \( f \ X_1 \cdots X_k \succeq X_i \) for some of the symbols \( f \).
WANDA has two ways to generate reduction pairs: weakly monotonic interpretations and recursive path orderings. Both techniques extend first-order methods, and are most powerful when arity is taken into account. To do this in the most natural way, WANDA
implicitly converts meta-terms which respect the \textit{arity} function into \textit{functional} notation, where applications are removed as follows:
\[
\begin{align*}
\text{uncurry}(x) &= x && \text{if } x \text{ is a variable} \\
\text{uncurry}(\lambda x.s) &= \lambda x.\text{uncurry}(s) \\
\text{uncurry}(Z[s_1, \ldots, s_k]) &= Z[\text{uncurry}(s_1), \ldots, \text{uncurry}(s_k)] \\
\text{uncurry}(f \ s_1 \cdots s_k) &= f(\text{uncurry}(s_1), \ldots, \text{uncurry}(s_k)) && \text{if } k = \textit{arity}(f) \\
\text{uncurry}(s \ t) &= @^{(\sigma, \tau)}(\text{uncurry}(s), \text{uncurry}(t)) && \text{if } s : \sigma \rightarrow \tau \text{ and } s \text{ does not} \\
&&& \text{have the form } f \ s_1 \cdots s_{\textit{arity}(f)-1}
\end{align*}
\]
Essentially, the rules are uncurried and applications replaced by explicit symbols. For \(f : \sigma_1 \rightarrow \cdots \rightarrow \sigma_k \rightarrow \tau\), we consider \(f(s_1, \ldots, s_k)\) as a (meta-)term of type \(\tau\).
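A possible implementation sketch of this uncurrying follows; the term encoding and names are illustrative rather than WANDA's internals, and the type annotations on the @ symbols are omitted for brevity:

```python
def uncurry(t, arity):
    """Translate an applicative term into functional notation.

    Terms: ('var', x) | ('lam', x, body) | ('meta', Z, [args]) |
           ('sym', f) | ('app', s, t).
    Output replaces saturated applications of f by ('fun', f, [args])
    and leftover applications by the explicit symbol ('at', s, t)."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], uncurry(t[2], arity))
    if t[0] == 'meta':
        return ('meta', t[1], [uncurry(a, arity) for a in t[2]])
    if t[0] == 'sym':
        assert arity.get(t[1], 0) == 0, "symbol occurs below its arity"
        return ('fun', t[1], [])
    # Application: flatten the spine  s t1 ... tn.
    head, args = t[1], [t[2]]
    while head[0] == 'app':
        args.insert(0, head[2])
        head = head[1]
    if head[0] == 'sym':
        k = arity[head[1]]
        assert len(args) >= k, "symbol occurs below its arity"
        res = ('fun', head[1], [uncurry(a, arity) for a in args[:k]])
        rest = args[k:]
    else:
        res, rest = uncurry(head, arity), args
    for a in rest:                      # surplus arguments use explicit @
        res = ('at', res, uncurry(a, arity))
    return res

arity = {'map': 2, 'cons': 2, 'nil': 0, 's': 1}
t = ('app', ('app', ('sym', 'map'),
             ('lam', 'x', ('meta', 'Z', [('var', 'x')]))),
     ('sym', 'nil'))
assert uncurry(t, arity) == \
    ('fun', 'map', [('lam', 'x', ('meta', 'Z', [('var', 'x')])),
                    ('fun', 'nil', [])])
```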
\textbf{Example 5.} The uncurried version of the AFSM in Example 1 is:
\[
\begin{align*}
\text{map}(\lambda x.Z[x], \text{nil}) &\Rightarrow \text{nil} \\
\text{map}(\lambda x.Z[x], \text{cons}(H, T)) &\Rightarrow \text{cons}(Z[H], \text{map}(\lambda x.Z[x], T))
\end{align*}
\]
The second rule in Example 2 is uncurried to:
\[
\text{map}(Z, \text{cons}(H, T)) \Rightarrow \text{cons}(\text{@}^{(\text{nat}, \text{nat})}(Z, H), \text{map}(Z, T))
\]
4.1. \textbf{Weakly monotonic algebras.} The idea of van de Pol’s \textit{weakly monotonic algebras} [26] is to assign valuations which map all function symbols \(f\) of type \(\sigma\) to a \textit{weakly monotonic functional} \(J_f\): an element of \([\sigma]\), where \([i]\) is the set of natural numbers for a base type \(i\) and \([\sigma \rightarrow \tau]\) is the set of those functions from \([\sigma]\) to \([\tau]\) that are weakly monotonic (i.e., if \(a, b \in [\sigma]\) and \(a \geq b\), then \(f(a) \geq f(b)\) for \(f \in [\sigma \rightarrow \tau]\), where \(\geq\) is a point-wise comparison).
This induces a value on closed terms, which can be extended to a reduction pair, as follows.
Given a meta-term \(s\) in functional notation and a function \(\alpha\) which maps each variable \(x : \sigma\) occurring freely in \(s\) to an element of \([\sigma]\) and each meta-variable \(Z : [\sigma_1 \times \cdots \times \sigma_n] \rightarrow \tau\) to an element of \([\sigma_1 \rightarrow \cdots \rightarrow \sigma_n \rightarrow \tau]\), we let \([s]^\alpha\) be recursively defined as follows:
\[
\begin{align*}
[x]^\alpha &= \alpha(x) \\
[\lambda x.s]^\alpha &= u \mapsto [s]^{\alpha[x:=u]} \\
[Z[s_1, \ldots, s_k]]^\alpha &= \alpha(Z)([s_1]^\alpha, \ldots, [s_k]^\alpha) \\
[f(s_1, \ldots, s_k)]^\alpha &= J_f([s_1]^\alpha, \ldots, [s_k]^\alpha)
\end{align*}
\]
For closed meta-terms \(\ell, r\), let \(\ell \succ r\) if \([\ell]^\alpha > [r]^\alpha\) for all \(\alpha\), and \(\ell \succeq r\) if \([\ell]^\alpha \geq [r]^\alpha\) for all \(\alpha\). Then \((\succeq, \succ)\) is a reduction pair if the valuations \(J_{@^{(\sigma,\tau)}}\) are chosen such that \(J_{@^{(\sigma,\tau)}}(F, X) \geq F(X)\). It is a strongly monotonic pair if each \(J_f\) (including all \(J_{@^{(\sigma,\tau)}}\)) is monotonic over \(>\) in the first \(\textit{arity}(f)\) arguments.
In [12], a strategy is discussed to find interpretations based on \textit{higher-order polynomials} for AFSs, and an automation using encodings of the ordering requirements into SAT. WANDA implements this methodology, only slightly adapted to take meta-variables into account.
\textbf{Example 6.} Let \(J_{\text{nil}} = 0\), \(J_{\text{cons}} = (n, m) \mapsto n + m + 1\), \(J_{\text{map}} = (f, n) \mapsto n \cdot f(n) + 2n + f(0)\) and \(J_{@^{(\text{nat},\text{nat})}} = (f, n) \mapsto f(n) + n\). Then, writing \(F := \alpha(Z)\), \(n := \alpha(H)\), \(m := \alpha(T)\):
\[
\begin{align*}
&\bullet \ [\text{map}(Z, \text{nil})]^\alpha = F(0) \geq 0 = [\text{nil}]^\alpha \\
&\bullet \ [\text{map}(Z, \text{cons}(H, T))]^\alpha = (n + m + 1) \cdot F(n + m + 1) + 2 \cdot (n + m + 1) + F(0) > \\
&\quad (F(n) + n) + (m \cdot F(m) + 2 \cdot m + F(0)) + 1 = [\text{cons}(@^{(\text{nat}, \text{nat})}(Z, H), \text{map}(Z, T))]^\alpha
\end{align*}
\]
Also, \(J_{@^{(\text{nat},\text{nat})}}(F, n) = F(n) + n \geq F(n)\), and we can choose all other \(J_{@^{(\sigma,\tau)}}\) similarly, so that \(\Rightarrow_\beta\) is included in \(\succeq\). As all \(J_f\) are strictly monotonic in all arguments, we may remove the second rule from Example 2.
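The inequalities in Example 6 are claims for all weakly monotonic F and all n, m; a quick numeric spot-check over a few sample functions (Python, my own encoding of the interpretations) can catch mistakes in such a candidate interpretation:

```python
J_nil = 0
J_cons = lambda n, m: n + m + 1
J_map = lambda f, n: n * f(n) + 2 * n + f(0)
J_at = lambda f, n: f(n) + n          # J_@ for the application symbol

# A few weakly monotonic functionals F on the naturals.
samples = [lambda x: 0, lambda x: x, lambda x: 2 * x + 3]

for F in samples:
    # First rule: [map(Z, nil)] >= [nil].
    assert J_map(F, J_nil) >= J_nil
    for n in range(6):
        for m in range(6):
            # Second rule: strict decrease.
            lhs = J_map(F, J_cons(n, m))
            rhs = J_cons(J_at(F, n), J_map(F, m))
            assert lhs > rhs
    # Beta requirement: J_@(F, n) >= F(n).
    assert all(J_at(F, n) >= F(n) for n in range(6))
```

This does not prove the ordering constraints (they must hold for every weakly monotonic F), but a failing assertion would immediately refute a candidate.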
4.2. StarHorpo. The recursive path ordering [8] is a syntactic method to extend an ordering on function symbols to an ordering on first-order terms. There are various extensions of RPO (e.g., [9, 15]), including several higher-order variations (e.g., [5, 14]). WANDA uses her own definition, based on iterative path orders [17], which works well with meta-variables and (unlike older HORPOs) is natively transitive.
Following [17], StarHorpo employs a star mark \(\star\) to indicate a decrease; intuitively, \(f^\star_\sigma(s_1, \ldots, s_k)\) indicates an upper bound for all functional meta-terms of type \(\sigma\) which are strictly smaller than \(f(s_1, \ldots, s_k)\). Let \(s^\star\) denote \(\lambda x_1 \ldots x_n. f^\star_\sigma(s_1, \ldots, s_k)\) if \(s = \lambda x_1 \ldots x_n. f(s_1, \ldots, s_k)\) with \(f(s_1, \ldots, s_k) : \sigma\). If \(s\) has any other form, then \(s^\star\) is undefined.
StarHorpo assumes a precedence \(\unrhd\): a quasi-ordering on all symbols whose strict part \(\rhd\) is well-founded; we let \(\approx\) denote the equivalence relation \(\unrhd \cap \unlhd\). We assume that there is a special symbol, \(\bot\), which is minimal for \(\unrhd\) (i.e., \(f \unrhd \bot\) for all \(f\)). All symbols are assigned a status in \(\{\text{Lex}, \text{Mul}\}\), and \(\succeq^f\) denotes either the lexicographic or multiset extension of \(\succeq\), depending on the status of \(f\). Then \((\succeq, \succ)\) is given by the following rules:
\[
\begin{align*}
(\text{Star}) & \quad s \succ t && \text{if } s^\star \succeq t \\
(\text{Var}) & \quad x \succeq x && \text{if } x \in \mathcal{V} \\
(\text{Abs}) & \quad \lambda x.s \succeq \lambda x.t && \text{if } s \succeq t \\
(\text{Meta}) & \quad Z[s_1, \ldots, s_k] \succeq Z[t_1, \ldots, t_k] && \text{if each } s_i \succeq t_i \\
(\text{Fun}) & \quad f(s_1, \ldots, s_n) \succeq g(t_1, \ldots, t_k) && \text{if } f \approx g \text{ and } [s_1, \ldots, s_n] \succeq^f [t_1, \ldots, t_k] \\
(\text{Put}) & \quad f(s_1, \ldots, s_n) \succeq t && \text{if } f^\star_\sigma(s_1, \ldots, s_n) \succeq t \quad (\text{for } f(\vec{s}) : \sigma) \\
(\text{Select}) & \quad f^\star_\sigma(s_1, \ldots, s_n) \succeq t && \text{if } s_i\langle f^\star_{\tau_1}(\vec{s}), \ldots, f^\star_{\tau_j}(\vec{s})\rangle \succeq t \text{ where } s_i : \tau_1 \rightarrow \ldots \rightarrow \tau_j \rightarrow \sigma \\
(\text{FAbs}) & \quad f^\star_{\sigma \rightarrow \tau}(s_1, \ldots, s_n) \succeq \lambda x.t && \text{if } f^\star_\tau(s_1, \ldots, s_n, x) \succeq t \\
(\text{Copy}) & \quad f^\star_\sigma(s_1, \ldots, s_n) \succeq g(t_1, \ldots, t_k) && \text{if } f \rhd g \text{ and } f^\star_{\rho_i}(\vec{s}) \succeq t_i \text{ for } 1 \leq i \leq k \\
(\text{Stat}) & \quad f^\star_\sigma(s_1, \ldots, s_n) \succeq g(t_1, \ldots, t_k) && \text{if } f \approx g, \ [s_1, \ldots, s_n] \succ^f [t_1, \ldots, t_k] \text{ and } f^\star_{\rho_i}(\vec{s}) \succeq t_i \text{ for } 1 \leq i \leq k \\
(\text{Bot}) & \quad s \succeq \bot_\sigma && \text{if } s : \sigma
\end{align*}
\]
Note that \(\succeq\) and \(\succ\) only compare terms of the same type, and that marked symbols \(f^\star\) may occur with different types (indicated as subscripts) within a term. Symbols \(f^\star\) may also have different numbers of arguments, but must always have at least \(\textit{arity}(f)\). The notation \(s\langle t_1, \ldots, t_n\rangle\) indicates an “application”: \(s\langle\rangle = s\), \((\lambda x.s)\langle t, \vec{u}\rangle = s[x := t]\langle\vec{u}\rangle\) and \(f^\star_{\sigma \rightarrow \tau}(\vec{s})\langle t, \vec{u}\rangle = f^\star_\tau(\vec{s}, t)\langle\vec{u}\rangle\). Moreover, as part of StarHorpo, function symbols may have some of their arguments permuted or (if strong monotonicity is not required) filtered away; symbols with no remaining arguments may be mapped to \(\bot_\sigma\) for a suitable \(\sigma\).
The full explanation of these rules is available in [19, Chapter 5].
Example 7. To see that \(\Rightarrow_\beta\) is included in \(\succeq\), note that \(\succeq\) is monotonic by (Fun), (Abs) and (Meta), and we can derive \(@^{(\sigma,\tau)}(\lambda x.Z[x], Y) \succeq Z[Y]\) by (Put), because \(@^\star_\tau(\lambda x.Z[x], Y) \succeq Z[Y]\) by (Select), because \(Z[@^\star_\sigma(\lambda x.Z[x], Y)] \succeq Z[Y]\) by (Meta), because \(@^\star_\sigma(\lambda x.Z[x], Y) \succeq Y\) by (Select), because \(Y \succeq Y\) by (Meta).
WANDA combines the search for a suitable precedence and status function with the search for a permutation and filtering, using a SAT encoding following [19, Chapter 8.6].
5. The higher-order dependency pair framework
If any rules remain after rule removal, WANDA passes them on to the dependency pair framework. Like the first-order DP framework [13], it is an extendable framework for termination and
non-termination, which new termination methods can easily be plugged into in the form of processors. The DP framework is detailed in \[10\]. We here consider the high-level steps.
**Delegation to a first-order prover.** Following \[11\], the first-order rules in the AFSM are identified and passed to an external first-order termination tool. If this tool detects non-termination and returns a counterexample that can be typed (or if the AFSM is orthogonal, in which case the typing of the first-order part is irrelevant for its termination), \textsc{WANDA} concludes non-termination. If the first-order prover concludes termination, then all dependency pairs for these first-order rules are omitted for the remainder of the framework.
**Static and Dynamic DPs.** There are two approaches to generate dependency pairs, originating from distinct lines of work around the same period \[20, 21\]. In both cases, a set DP of “dependency pairs” (a kind of rewrite rules) is generated, and termination follows if there is no infinite chain \(s_1, s_2, \ldots\) with each \(s_i \Rightarrow_{\text{DP}} \cdot \Rightarrow_{\mathcal{R}}^{*} s_{i+1}\). Here, steps using \(\Rightarrow_{\text{DP}}\) may only be applied at the root of a term, and steps using \(\Rightarrow_{\mathcal{R}}\) only in argument positions. A \(DP\) problem is the question whether a chain of a certain form exists, and \(DP\) processors simplify \(DP\) problems into easier ones – for example by removing \(DPs\) using a reduction pair.
The dynamic approach is always applicable, and could in theory be used also to prove non-termination – although in \textsc{WANDA} this is not yet done. The static approach is only applicable to AFSMs which pass certain restrictions, and may admit infinite chains even when the AFSM is terminating. However, proofs using the static approach are typically much simpler, since it does not generate “collapsing” DPs (of a form such as \(f \ \ell_1 \cdots \ell_n \Rightarrow Z[s_1] \ s_2 \cdots s_m\)).
Despite their differences, the same processors apply to static and dynamic DPs; the only difference is in their generation and whether the initial DP problem can be used to prove non-termination. Thus, \textsc{WANDA} uses the following strategy.
```plaintext
if not static_applicable(R) then return framework(dynamic_DPs, R);
else if static_DPs ⊆ dynamic_DPs then return framework(static_DPs, R);
else let tmp = framework(dynamic_DPs, R) in
  if tmp = YES or tmp = NO then return tmp;
  else return framework(static_DPs, R);
```
**6. Conclusions and future work**
Overall, \textsc{WANDA} takes an input file describing an AFSM (or an AFS), performs an analysis following Sections 3–5 and then prints \texttt{YES} (a termination proof was found), \texttt{NO} (a non-termination proof was found) or \texttt{MAYBE} (neither could be proved). In the first two cases, this is followed by a human-readable proof.
There are many directions for improvement. Most pertinently, due to the large database of termination benchmarks in the competition format \[28\], \textsc{WANDA} has been optimised for AFSs and remains decidedly weak on meta-variables with arguments. Moreover, non-termination analysis is very limited and does not take advantage of the DP framework. Other improvements could be to further extend first-order termination techniques, and to build on primarily higher-order techniques like sized types \[3\].
A complete discussion of most techniques in \textsc{WANDA} and the technology behind automating them is available in the author’s PhD thesis \[19\]. \textsc{WANDA} is open-source and available from:
\texttt{http://wandahot.sourceforge.net/}
REFERENCES
[10] C. Fuhs and C. Kop. The unified higher-order dependency pair framework. TODO.
The OCB Authenticated-Encryption Algorithm
Abstract
This document specifies OCB, a shared-key blockcipher-based encryption scheme that provides confidentiality and authenticity for plaintexts and authenticity for associated data. This document is a product of the Crypto Forum Research Group (CFRG).
Status of This Memo
This document is not an Internet Standards Track specification; it is published for informational purposes.
This document is a product of the Internet Research Task Force (IRTF). The IRTF publishes the results of Internet-related research and development activities. These results might not be suitable for deployment. This RFC represents the consensus of the Crypto Forum Research Group of the Internet Research Task Force (IRTF). Documents approved for publication by the IRSG are not a candidate for any level of Internet Standard; see Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7253.
Copyright Notice
Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
1. Introduction
Schemes for authenticated encryption (AE) simultaneously provide for confidentiality and authentication. While this goal would traditionally be achieved by melding separate encryption and authentication mechanisms, each using its own key, integrated AE schemes intertwine what is needed for confidentiality and what is needed for authenticity. By conceptualizing AE as a single cryptographic goal, AE schemes are less likely to be misused than conventional encryption schemes. Also, integrated AE schemes can be significantly faster than what one sees from composing separate confidentiality and authenticity means.
When an AE scheme allows for the authentication of unencrypted data at the same time that a plaintext is being encrypted and authenticated, the scheme is an authenticated encryption with associated data (AEAD) scheme. Associated data can be useful when, for example, a network packet has unencrypted routing information and an encrypted payload.
OCB is an AEAD scheme that depends on a blockcipher. This document fully defines OCB encryption and decryption except for the choice of the blockcipher and the length of authentication tag that is part of the ciphertext. The blockcipher must have a 128-bit blocksize. Each choice of blockcipher and tag length specifies a different variant of OCB. Several AES-based variants are defined in Section 3.1.
OCB encryption and decryption employ a nonce N, which must be distinct for each invocation of the OCB encryption operation. OCB requires the associated data A to be specified when one encrypts or decrypts, but it may be zero-length. The plaintext P and the associated data A can have any bitlength. The ciphertext C one gets by encrypting P in the presence of A consists of a ciphertext-core having the same length as P, plus an authentication tag. One can view the resulting ciphertext as either the pair (ciphertext-core, tag) or their concatenation (ciphertext-core || tag), the difference being purely how one assembles and parses ciphertexts. This document uses concatenation.
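For example, with a 16-byte tag (as in one of the AES-based variants defined later), assembling and parsing a ciphertext is just concatenation and a length-based split. The following is an illustrative sketch, not part of the RFC:

```python
TAGLEN = 16  # bytes; fixed by the chosen OCB variant (assumed here)

def assemble(core: bytes, tag: bytes) -> bytes:
    """C = ciphertext-core || tag."""
    return core + tag

def parse(c: bytes):
    """Split C back into (core, tag); |core| = |C| - TAGLEN."""
    assert len(c) >= TAGLEN
    return c[:-TAGLEN], c[-TAGLEN:]

core, tag = b"same-length-as-plaintext", bytes(16)
assert parse(assemble(core, tag)) == (core, tag)
```

Note that the receiver must know the variant's tag length in advance; it is not recoverable from the ciphertext itself.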
OCB encryption protects the confidentiality of P and the authenticity of A, N, and P. It does this using, on average, about a + m + 1.02 blockcipher calls, where a is the blocklength of A, m is the blocklength of P, and the nonce N is implemented as a counter (if N is random, then OCB uses a + m + 2 blockcipher calls). If A is fixed during a session, then, after preprocessing, there is effectively no cost to having A authenticated on subsequent encryptions, and the mode will average m + 1.02 blockcipher calls. OCB requires a single key K for the underlying blockcipher, and all blockcipher calls are keyed by K. OCB is online. In particular, one need not know the length of A or P to proceed with encryption, nor need one know the length of A or C to proceed with decryption. OCB is parallelizable: the bulk of its blockcipher calls can be performed simultaneously. Computational work beyond blockcipher calls consists of a small and fixed number of logical operations per call. OCB enjoys provable security: the mode of operation is secure assuming that the underlying blockcipher is secure. As with most modes of operation, security degrades as the number of blocks processed gets large (see Section 5 for details).
For reasons of generality, OCB is defined to operate on arbitrary
bitstrings. But for reasons of simplicity and efficiency, most
implementations will assume that strings operated on are bytestrings
(i.e., strings that are a multiple of 8 bits). To promote
interoperability, implementations of OCB that communicate with
implementations of unknown capabilities should restrict all provided
values (nonces, tags, plaintexts, ciphertexts, and associated data)
to bytestrings.
The version of OCB defined in this document is a refinement of two
prior schemes. The original OCB version was published in 2001 [OCB1]
and was listed as an optional component in IEEE 802.11i. A second
version was published in 2004 [OCB2] and is specified in ISO 19772.
The scheme described here is called OCB3 in the 2011 paper describing
the mode [OCB3]; it shall be referred to simply as OCB throughout
this document. The only difference between the algorithm of this RFC
Krovetz & Rogaway
Informational
[Page 3]
and that of the [OCB3] paper is that the tag length is now encoded into the internally formatted nonce. See [OCB3] for complete references, timing information, and a discussion of the differences between the algorithms. OCB was initially the acronym for Offset Codebook but is now the algorithm’s full name.
OCB has received years of in-depth analysis previous to its submission to the CFRG and has been under review by the members of the CFRG for over a year. It is the consensus of the CFRG that the security mechanisms provided by the OCB AEAD algorithm described in this document are suitable for use in providing confidentiality and authentication.
2. Notation and Basic Operations
There are two types of variables used in this specification, strings and integers. Although strings processed by most implementations of OCB will be strings of bytes, bit-level operations are used throughout this specification document for defining OCB. String variables are always written with an initial uppercase letter while integer variables are written in all lowercase. Following C’s convention, a single equals ("=") indicates variable assignment and double equals ("==") is the equality relation. Whenever a variable is followed by an underscore ("_"), the underscore is intended to denote a subscript, with the subscripted expression requiring evaluation to resolve the meaning of the variable. For example, when i == 2, then P_i refers to the variable P_2.
c^i          The integer c raised to the i-th power.

bitlen(S)    The length of string S in bits (e.g., bitlen(101) == 3).

zeros(n)     The string made of n zero bits.

ntz(n)       The number of trailing zero bits in the base-2
             representation of the positive integer n. More formally,
             ntz(n) is the largest integer x for which 2^x divides n.

S xor T      The string that is the bitwise exclusive-or of S and T.
             Strings S and T will always have the same length.

S[i]         The i-th bit of the string S (indices begin at 1, so if S
             is 011, then S[1] == 0, S[2] == 1, S[3] == 1).

S[i..j]      The substring of S consisting of bits i through j,
             inclusive.
S || T String S concatenated with string T (e.g., 000 || 111 == 000111).
str2num(S) The base-2 interpretation of bitstring S (e.g., str2num(1110) == 14).
num2str(i,n) The n-bit string whose base-2 interpretation is i (e.g., num2str(14,4) == 1110 and num2str(1,2) == 01).
double(S) If S[1] == 0, then double(S) == (S[2..128] || 0); otherwise, double(S) == (S[2..128] || 0) xor (zeros(120) || 10000111).
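For byte-oriented implementations, ntz() and double() can be sketched in Python, holding 128-bit blocks as integers (an illustration only; the function names are ours, and the spec itself operates on bitstrings):

```python
def ntz(n: int) -> int:
    """Number of trailing zero bits of the positive integer n."""
    return (n & -n).bit_length() - 1

def double(s: int) -> int:
    """double(S) on a 128-bit block held as an int: shift left one bit,
    and if the dropped top bit was set, xor in zeros(120) || 10000111,
    i.e. the constant 0x87."""
    shifted = (s << 1) & ((1 << 128) - 1)
    if s >> 127:
        shifted ^= 0x87
    return shifted
```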
3. OCB Global Parameters
To be complete, the algorithms in this document require specification of two global parameters: a blockcipher operating on 128-bit blocks and the length of authentication tags in use.
Specifying a blockcipher implicitly defines the following symbols.
KEYLEN The blockcipher’s key length in bits.
ENCIPHER(K,P) The blockcipher function mapping 128-bit plaintext block P to its corresponding ciphertext block using KEYLEN-bit key K.
DECIPHER(K,C) The inverse blockcipher function mapping 128-bit ciphertext block C to its corresponding plaintext block using KEYLEN-bit key K.
The TAGLEN parameter specifies the length of authentication tag used by OCB and may be any value up to 128. Greater values for TAGLEN provide greater assurances of authenticity, but ciphertexts produced by OCB are longer than their corresponding plaintext by TAGLEN bits. See Section 5 for details about TAGLEN and security.
As an example, if 128-bit authentication tags and AES with 192-bit keys are to be used, then KEYLEN is 192, ENCIPHER refers to the AES-192 cipher, DECIPHER refers to the AES-192 inverse cipher, and TAGLEN is 128 [AES].
3.1. Named OCB Parameter Sets and RFC 5116 Constants
The following table gives names to common OCB global parameter sets. Each of the AES variants is defined in [AES].
<table>
<thead>
<tr>
<th>Name</th>
<th>Blockcipher</th>
<th>TAGLEN</th>
</tr>
</thead>
<tbody>
<tr>
<td>AEAD_AES_128_OCB_TAGLEN128</td>
<td>AES-128</td>
<td>128</td>
</tr>
<tr>
<td>AEAD_AES_128_OCB_TAGLEN96</td>
<td>AES-128</td>
<td>96</td>
</tr>
<tr>
<td>AEAD_AES_128_OCB_TAGLEN64</td>
<td>AES-128</td>
<td>64</td>
</tr>
<tr>
<td>AEAD_AES_192_OCB_TAGLEN128</td>
<td>AES-192</td>
<td>128</td>
</tr>
<tr>
<td>AEAD_AES_192_OCB_TAGLEN96</td>
<td>AES-192</td>
<td>96</td>
</tr>
<tr>
<td>AEAD_AES_192_OCB_TAGLEN64</td>
<td>AES-192</td>
<td>64</td>
</tr>
<tr>
<td>AEAD_AES_256_OCB_TAGLEN128</td>
<td>AES-256</td>
<td>128</td>
</tr>
<tr>
<td>AEAD_AES_256_OCB_TAGLEN96</td>
<td>AES-256</td>
<td>96</td>
</tr>
<tr>
<td>AEAD_AES_256_OCB_TAGLEN64</td>
<td>AES-256</td>
<td>64</td>
</tr>
</tbody>
</table>
RFC 5116 defines an interface for authenticated-encryption schemes [RFC5116]. RFC 5116 requires the specification of certain constants for each named AEAD scheme. For each of the OCB parameter sets listed above: P_MAX, A_MAX, and C_MAX are all unbounded; N_MIN is 1 byte, and N_MAX is 15 bytes. The parameter sets indicating the use of AES-128, AES-192, and AES-256 have K_LEN equal to 16, 24, and 32 bytes, respectively.
Each ciphertext is longer than its corresponding plaintext by exactly TAGLEN bits, and TAGLEN is given at the end of each name. For instance, an AEAD_AES_128_OCB_TAGLEN64 ciphertext is exactly 64 bits longer than its corresponding plaintext.
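The nine named parameter sets above can be captured in a small lookup table. This is a hypothetical helper, not part of the specification; the names and values are taken from the table in Section 3.1.

```python
# Map each named OCB parameter set to its AES key length and tag length
# (both in bits), mirroring the Section 3.1 table.
OCB_PARAMS = {
    f"AEAD_AES_{klen}_OCB_TAGLEN{taglen}": {"keylen": klen, "taglen": taglen}
    for klen in (128, 192, 256)
    for taglen in (128, 96, 64)
}
```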
4. OCB Algorithms
OCB is described in this section using pseudocode. Given any collection of inputs of the required types, following the pseudocode description for a function will produce the correct output of the promised type.
4.1. Processing Associated Data: HASH
OCB has the ability to authenticate unencrypted associated data at the same time that it encrypts and authenticates a plaintext. The following hash function is central to providing this functionality. If an application has no associated data, then the associated data should be considered to exist and to be the empty string. HASH, conveniently, always returns zeros(128) when the associated data is the empty string.
Function name:
HASH
Input:
K, string of KEYLEN bits // Key
A, string of any length // Associated data
Output:
Sum, string of 128 bits // Hash result
Sum is defined as follows.
//
// Key-dependent variables
//
L_* = ENCIPHER(K, zeros(128))
L_$ = double(L_*)
L_0 = double(L_$)
L_i = double(L_{i-1}) for every integer i > 0
//
// Consider A as a sequence of 128-bit blocks
//
Let m be the largest integer so that 128m <= bitlen(A)
Let A_1, A_2, ..., A_m and A_* be strings so that
A == A_1 || A_2 || ... || A_m || A_*, and
bitlen(A_i) == 128 for each 1 <= i <= m.
Note: A_* may possibly be the empty string.
//
// Process any whole blocks
//
Sum_0 = zeros(128)
Offset_0 = zeros(128)
for each 1 <= i <= m
Offset_i = Offset_{i-1} xor L_{ntz(i)}
Sum_i = Sum_{i-1} xor ENCIPHER(K, A_i xor Offset_i)
end for
//
// Process any final partial block; compute final hash value
//
if bitlen(A_*) > 0 then
Offset_* = Offset_m xor L_*
CipherInput = (A_* || 1 || zeros(127-bitlen(A_*))) xor Offset_*
Sum = Sum_m xor ENCIPHER(K, CipherInput)
else
Sum = Sum_m
end if
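The HASH pseudocode above can be sketched in Python for byte-aligned associated data. This is an illustration under stated assumptions, not a conformant implementation: the blockcipher is supplied by the caller as an `encipher` callback standing in for ENCIPHER(K, .), and key handling is elided.

```python
from typing import Callable

def ocb_hash(encipher: Callable[[bytes], bytes], a: bytes) -> bytes:
    """Sketch of HASH (Section 4.1) for byte-aligned associated data A."""
    mask = (1 << 128) - 1

    def double(block: bytes) -> bytes:
        n = int.from_bytes(block, "big")
        n2 = ((n << 1) & mask) ^ (0x87 if n >> 127 else 0)
        return n2.to_bytes(16, "big")

    def xor(x: bytes, y: bytes) -> bytes:
        return bytes(p ^ q for p, q in zip(x, y))

    def ntz(i: int) -> int:
        return (i & -i).bit_length() - 1

    l_star = encipher(bytes(16))       # L_* = ENCIPHER(K, zeros(128))
    l = [double(double(l_star))]       # L_0 = double(L_$); extended on demand

    m, rem = divmod(len(a), 16)        # whole blocks and final partial block
    total = bytes(16)                  # Sum_0
    offset = bytes(16)                 # Offset_0
    for i in range(1, m + 1):
        while ntz(i) >= len(l):
            l.append(double(l[-1]))    # L_i = double(L_{i-1})
        offset = xor(offset, l[ntz(i)])
        total = xor(total, encipher(xor(a[16 * (i - 1):16 * i], offset)))
    if rem:                            # pad final block as A_* || 1 || 0...0
        offset = xor(offset, l_star)
        padded = a[16 * m:] + b"\x80" + bytes(15 - rem)
        total = xor(total, encipher(xor(padded, offset)))
    return total
```

With empty associated data the loop and padding branch are both skipped, so the function returns zeros(128), matching the property stated above.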
4.2. Encryption: OCB-ENCRYPT
This function computes a ciphertext (which includes a bundled authentication tag) when given a plaintext, associated data, nonce, and key. For each invocation of OCB-ENCRYPT using the same key K, the value of the nonce input N must be distinct.
Function name:
OCB-ENCRYPT
Input:
- K, string of KEYLEN bits // Key
- N, string of no more than 120 bits // Nonce
- A, string of any length // Associated data
- P, string of any length // Plaintext
Output:
- C, string of length bitlen(P) + TAGLEN bits // Ciphertext
C is defined as follows.
//
// Key-dependent variables
//
L_* = ENCIPHER(K, zeros(128))
L_$ = double(L_*)
L_0 = double(L_$)
L_i = double(L_{i-1}) for every integer i > 0

//
// Consider P as a sequence of 128-bit blocks
//
Let m be the largest integer so that 128m <= bitlen(P)
Let P_1, P_2, ..., P_m and P_* be strings so that
  P == P_1 || P_2 || ... || P_m || P_*, and
  bitlen(P_i) == 128 for each 1 <= i <= m.
Note: P_* may possibly be the empty string.

//
// Nonce-dependent and per-encryption variables
//
Nonce = num2str(TAGLEN mod 128,7) || zeros(120-bitlen(N)) || 1 || N
bottom = str2num(Nonce[123..128])
Ktop = ENCIPHER(K, Nonce[1..122] || zeros(6))
Stretch = Ktop || (Ktop[1..64] xor Ktop[9..72])
Offset_0 = Stretch[1+bottom..128+bottom]
Checksum_0 = zeros(128)

//
// Process any whole blocks
//
for each 1 <= i <= m
   Offset_i = Offset_{i-1} xor L_{ntz(i)}
   C_i = Offset_i xor ENCIPHER(K, P_i xor Offset_i)
   Checksum_i = Checksum_{i-1} xor P_i
end for

//
// Process any final partial block and compute raw tag
//
if bitlen(P_*) > 0 then
   Offset_* = Offset_m xor L_*
   Pad = ENCIPHER(K, Offset_*)
   C_* = P_* xor Pad[1..bitlen(P_*)]
   Checksum_* = Checksum_m xor (P_* || 1 || zeros(127-bitlen(P_*)))
   Tag = ENCIPHER(K, Checksum_* xor Offset_* xor L_$) xor HASH(K,A)
else
   C_* = <empty string>
   Tag = ENCIPHER(K, Checksum_m xor Offset_m xor L_$) xor HASH(K,A)
end if

//
// Assemble ciphertext
//
C = C_1 || C_2 || ... || C_m || C_* || Tag[1..TAGLEN]
4.3. Decryption: OCB-DECRYPT
This function computes a plaintext when given a ciphertext, associated data, nonce, and key. An authentication tag is embedded in the ciphertext. If the tag is not correct for the ciphertext, associated data, nonce, and key, then an INVALID signal is produced.
Function name:
OCB-DECRYPT
Input:
K, string of KEYLEN bits // Key
N, string of no more than 120 bits // Nonce
A, string of any length // Associated data
C, string of at least TAGLEN bits // Ciphertext
Output:
P, string of length bitlen(C) - TAGLEN bits, // Plaintext
or INVALID indicating authentication failure
P is defined as follows.
//
// Key-dependent variables
//
L_* = ENCIPHER(K, zeros(128))
L_$ = double(L_*)
L_0 = double(L_$)
L_i = double(L_{i-1}) for every integer i > 0

//
// Consider C as a sequence of 128-bit blocks
//
Let m be the largest integer so that 128m <= bitlen(C) - TAGLEN
Let C_1, C_2, ..., C_m, C_* and T be strings so that
  C == C_1 || C_2 || ... || C_m || C_* || T,
  bitlen(C_i) == 128 for each 1 <= i <= m, and
  bitlen(T) == TAGLEN.
Note: C_* may possibly be the empty string.

//
// Nonce-dependent and per-decryption variables
//
Nonce = num2str(TAGLEN mod 128,7) || zeros(120-bitlen(N)) || 1 || N
bottom = str2num(Nonce[123..128])
Ktop = ENCIPHER(K, Nonce[1..122] || zeros(6))
Stretch = Ktop || (Ktop[1..64] xor Ktop[9..72])
Offset_0 = Stretch[1+bottom..128+bottom]
Checksum_0 = zeros(128)

//
// Process any whole blocks
//
for each 1 <= i <= m
   Offset_i = Offset_{i-1} xor L_{ntz(i)}
   P_i = Offset_i xor DECIPHER(K, C_i xor Offset_i)
   Checksum_i = Checksum_{i-1} xor P_i
end for

//
// Process any final partial block and compute raw tag
//
if bitlen(C_*) > 0 then
   Offset_* = Offset_m xor L_*
   Pad = ENCIPHER(K, Offset_*)
   P_* = C_* xor Pad[1..bitlen(C_*)]
   Checksum_* = Checksum_m xor (P_* || 1 || zeros(127-bitlen(P_*)))
   Tag = ENCIPHER(K, Checksum_* xor Offset_* xor L_$) xor HASH(K,A)
else
   P_* = <empty string>
   Tag = ENCIPHER(K, Checksum_m xor Offset_m xor L_$) xor HASH(K,A)
end if

//
// Check for validity and assemble plaintext
//
if Tag[1..TAGLEN] == T then
   P = P_1 || P_2 || ... || P_m || P_*
else
   P = INVALID
end if
5. Security Considerations
OCB achieves two security properties, confidentiality and authenticity. Confidentiality is defined via "indistinguishability from random bits", meaning that an adversary is unable to distinguish OCB outputs from an equal number of random bits. Authenticity is defined via "authentication of ciphertexts", meaning that an adversary is unable to produce any valid nonce-ciphertext pair that it has not already acquired. The security guarantees depend on the underlying blockcipher being secure in the sense of a strong pseudorandom permutation. Thus, if OCB is used with a blockcipher that is not secure as a strong pseudorandom permutation, the security guarantees vanish. The need for the strong pseudorandom permutation property means that OCB should be used with a conservatively designed, well-trusted blockcipher, such as AES.
Both the confidentiality and the authenticity properties of OCB degrade as per s^2 / 2^128, where s is the total number of blocks that the adversary acquires. The consequence of this formula is that the proven security disappears when s becomes as large as 2^64. Thus, the user should never use a key to generate an amount of ciphertext that is near to, or exceeds, 2^64 blocks. In order to ensure that s^2 / 2^128 remains small, a given key should be used to encrypt at most 2^48 blocks (2^55 bits or 4 petabytes), including the associated data. To ensure these limits are not crossed, automated key management is recommended in systems exchanging large amounts of data [RFC4107].
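The per-key limit above is simple arithmetic over 128-bit blocks, sketched here for concreteness (the variable names are ours):

```python
BLOCK_BITS = 128                      # OCB operates on 128-bit blocks

max_blocks = 2 ** 48                  # recommended per-key encryption limit
max_bits = max_blocks * BLOCK_BITS    # 2^55 bits under one key
max_bytes = max_bits // 8             # 2^52 bytes, roughly 4.5 * 10^15
```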
When a ciphertext decrypts as INVALID, it is the implementor’s responsibility to make sure that no information beyond this fact is made adversarially available.
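One common way information beyond the INVALID signal leaks is through a variable-time tag comparison. In Python, the standard library's `hmac.compare_digest` compares in time independent of where the inputs differ; the wrapper name below is ours:

```python
import hmac

def tags_match(received: bytes, computed: bytes) -> bool:
    """Compare authentication tags in constant time, so the comparison
    does not reveal how many leading tag bytes were correct."""
    return hmac.compare_digest(received, computed)
```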
OCB encryption and decryption produce an internal 128-bit authentication tag. The parameter TAGLEN determines how many bits of
this internal tag are included in ciphertexts and used for authentication. The value of TAGLEN has two impacts: an adversary can trivially forge with probability 2^-TAGLEN, and ciphertexts are TAGLEN bits longer than their corresponding plaintexts. It is up to the application designer to choose an appropriate value for TAGLEN. Long tags cost no more computationally than short ones.
Normally, a given key should be used to create ciphertexts with a single tag length, TAGLEN, and an application should reject any ciphertext that claims authenticity under the same key but a different tag length. While the ciphertext core and all of the bits of the tag do depend on the tag length, this is done for added robustness to misuse and should not suggest that receivers accept ciphertexts employing variable tag lengths under a single key.
Timing attacks are not a part of the formal security model and an implementation should take care to mitigate them in contexts where this is a concern. To render timing attacks impotent, the amount of time to encrypt or decrypt a string should be independent of the key and the contents of the string. The only explicitly conditional OCB operation that depends on private data is double(), which means that using constant-time blockcipher and double() implementations eliminates most (if not all) sources of timing attacks on OCB. Power-usage attacks are likewise out of the scope of the formal model and should be considered for environments where they are threatening.
The OCB encryption scheme reveals in the ciphertext the length of the plaintext. Sometimes the length of the plaintext is a valuable piece of information that should be hidden. For environments where "traffic analysis" is a concern, techniques beyond OCB encryption (typically involving padding) would be necessary.
Defining the ciphertext that results from OCB-ENCRYPT to be the pair (C_1 || C_2 || ... || C_m || C_*, Tag[1..TAGLEN]) instead of the concatenation C_1 || C_2 || ... || C_m || C_* || Tag[1..TAGLEN] introduces no security concerns. Because TAGLEN is fixed, both versions allow ciphertexts to be parsed unambiguously.
5.1. Nonce Requirements
It is crucial that, as one encrypts, one does not repeat a nonce. The inadvertent reuse of the same nonce by two invocations of the OCB encryption operation, with the same key, but with distinct plaintext values, undermines the confidentiality of the plaintexts protected in those two invocations and undermines all of the authenticity and integrity protection provided by that key. For this reason, OCB should be used only when nonce uniqueness can be provided with certainty. Note that it is acceptable to input the same nonce value
multiple times to the decryption operation. We emphasize that the security consequences are quite serious if an attacker observes two ciphertexts that were created using the same nonce and key values, unless the plaintext and associated data values in both invocations of the encrypt operation were identical. First, a loss of confidentiality ensues because the attacker will be able to infer relationships between the two plaintext values. Second, a loss of authenticity ensues because the attacker will be able to recover secret information used to provide authenticity, making subsequent forgeries trivial. Note that there are AEAD schemes, particularly the Synthetic Initialization Vector (SIV) [RFC5297], appropriate for environments where nonces are unavailable or unreliable. OCB is not such a scheme.
Nonces need not be secret, and a counter may be used for them. If two parties send OCB-encrypted plaintexts to one another using the same key, then the space of nonces used by the two parties must be partitioned so that no nonce that could be used by one party to encrypt could be used by the other to encrypt (e.g., odd and even counters).
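The odd/even partitioning suggested above can be sketched as a hypothetical per-party counter source (names and the 96-bit nonce width are illustrative, not mandated):

```python
from itertools import count, islice

def nonce_source(party: int, parties: int = 2):
    """Hypothetical nonce counter: party k draws nonces congruent to k
    modulo `parties`, so the parties' nonce spaces can never collide."""
    for i in count(party, parties):
        yield i.to_bytes(12, "big")        # 96-bit counter nonces
```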
6. IANA Considerations
The Internet Assigned Numbers Authority (IANA) has defined a registry for Authenticated Encryption with Associated Data parameters. The IANA has added the following entries to the AEAD Registry. Each name refers to a set of parameters defined in Section 3.1.
<table>
<thead>
<tr>
<th>Name</th>
<th>Reference</th>
<th>Numeric ID</th>
</tr>
</thead>
<tbody>
<tr>
<td>AEAD_AES_128_OCB_TAGLEN128</td>
<td>Section 3.1</td>
<td>20</td>
</tr>
<tr>
<td>AEAD_AES_128_OCB_TAGLEN96</td>
<td>Section 3.1</td>
<td>21</td>
</tr>
<tr>
<td>AEAD_AES_128_OCB_TAGLEN64</td>
<td>Section 3.1</td>
<td>22</td>
</tr>
<tr>
<td>AEAD_AES_192_OCB_TAGLEN128</td>
<td>Section 3.1</td>
<td>23</td>
</tr>
<tr>
<td>AEAD_AES_192_OCB_TAGLEN96</td>
<td>Section 3.1</td>
<td>24</td>
</tr>
<tr>
<td>AEAD_AES_192_OCB_TAGLEN64</td>
<td>Section 3.1</td>
<td>25</td>
</tr>
<tr>
<td>AEAD_AES_256_OCB_TAGLEN128</td>
<td>Section 3.1</td>
<td>26</td>
</tr>
<tr>
<td>AEAD_AES_256_OCB_TAGLEN96</td>
<td>Section 3.1</td>
<td>27</td>
</tr>
<tr>
<td>AEAD_AES_256_OCB_TAGLEN64</td>
<td>Section 3.1</td>
<td>28</td>
</tr>
</tbody>
</table>
7. Acknowledgements
The design of the original OCB scheme [OCB1] was done while Rogaway was at Chiang Mai University, Thailand. Follow-up work [OCB2] was done with support of NSF grant 0208842 and a gift from Cisco. The final work by Krovetz and Rogaway [OCB3] that has resulted in this
specification was supported by NSF grant 0904380. Thanks go to the
many members of the Crypto Forum Research Group (CFRG) who provided
feedback on earlier drafts. Thanks in particular go to David McGrew
for contributing some text and for managing the RFC approval process,
to James Manger for initiating a productive discussion on tag-length
dependency and for greatly improving Appendix A, to Matt Caswell and
Peter Dettman for writing implementations and verifying test vectors,
and to Stephen Farrell and Spencer Dawkins for their careful reading
and suggestions.
8. References
8.1. Normative References
[AES]      National Institute of Standards and Technology, "Advanced
           Encryption Standard (AES)", FIPS PUB 197, November 2001.

[RFC5116]  McGrew, D., "An Interface and Algorithms for Authenticated
           Encryption", RFC 5116, January 2008.

8.2. Informative References

[OCB1]     Rogaway, P., Bellare, M., Black, J., and T. Krovetz, "OCB:
           A Block-Cipher Mode of Operation for Efficient
           Authenticated Encryption", ACM Conference on Computer and
           Communications Security -- CCS 2001, ACM Press, 2001.

[OCB2]     Rogaway, P., "Efficient Instantiations of Tweakable
           Blockciphers and Refinements to Modes OCB and PMAC",
           Advances in Cryptology -- ASIACRYPT 2004, Springer, 2004.

[OCB3]     Krovetz, T. and P. Rogaway, "The Software Performance of
           Authenticated-Encryption Modes", Fast Software
           Encryption -- FSE 2011, Springer, 2011.

[RFC4107]  Bellovin, S. and R. Housley, "Guidelines for Cryptographic
           Key Management", BCP 107, RFC 4107, June 2005.

[RFC5297]  Harkins, D., "Synthetic Initialization Vector (SIV)
           Authenticated Encryption Using the Advanced Encryption
           Standard (AES)", RFC 5297, October 2008.
Appendix A. Sample Results
This section gives sample output values for various inputs when using OCB with AES as per the parameters defined in Section 3.1. All strings are represented in hexadecimal (e.g., 0F represents the bitstring 00001111).
The following 16 (N,A,P,C) tuples show the ciphertext C that results from OCB-ENCRYPT(K,N,A,P) for various lengths of associated data (A) and plaintext (P). The key (K) has a fixed value, the tag length is 128 bits, and the nonce (N) increments.
K : 000102030405060708090A0B0C0D0E0F
An empty entry indicates the empty string.
N: BBAA99887766554433221100
A:
P:
C: 785407BFFFC8AD9EDCC5520AC9111EE6

N: BBAA99887766554433221101
A: 0001020304050607
P: 0001020304050607
C: 6820B3657B6F615A5725BDA0D3B4EB3A257C9AF1F8F03009

N: BBAA99887766554433221102
A: 0001020304050607
P:
C: 81017F8203F081277152FADE694A0A00

N: BBAA99887766554433221103
A:
P: 0001020304050607
C: 45DD69F8F5AAE72414054CD1F35D82760B2CD0D2F99BFA9

N: BBAA99887766554433221104
A: 000102030405060708090A0B0C0D0E0F
P: 000102030405060708090A0B0C0D0E0F
C: 571D535B60B277188BE5147170A9A22C3AD7A4FF3835B8C5701C1CEC8FC3358

N: BBAA99887766554433221105
A: 000102030405060708090A0B0C0D0E0F
P:
C: 8CF761B6902EF764462AD86498CA6B97

N: BBAA99887766554433221106
A:
P: 000102030405060708090A0B0C0D0E0F
C: 5CE88SEC2E692706A915C00AE88B2396F40E1C743F52436BDF06D8FA1ECA343D

N: BBAA99887766554433221107
A: 000102030405060708090A0B0C0D0E0F1011121314151617
P: 000102030405060708090A0B0C0D0E0F1011121314151617
C: 1CA2207308C87C010756104D8840CE1952F09673A448A122C92C62241051F57356D7F3C905BB0E07F

N: BBAA99887766554433221108
A: 000102030405060708090A0B0C0D0E0F1011121314151617
P:
C: 6DC225A071FC1B9F7C69F93B0F1E10DE

N: BBAA99887766554433221109
A:
P: 000102030405060708090A0B0C0D0E0F1011121314151617
C: 221BD0DE7FA6FE993ECCD769460A0AF2D6CDED0C395B1C3C725F32494B9F914D85C0B1EB38357FF

N: BBAA9988776655443322110A
A: 000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F
P: 000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F
C: BD6F6C496201C69296C11EFD138A4674BD3C707924B964DEAFFC40319AF5A48540FBB186C5553C68AD9F592A79A4240

N: BBAA9988776655443322110B
A: 000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F
P:
C: FE80690BEE8A485D11F32965BC9D2A32

N: BBAA9988776655443322110C
A:
P: 000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F
C: 2942BFC773BDA23CABC6ACFD9BF58E35DB300F0973792EF46040C53F1432BCDF5E1DE3BC18A5F840B52E6534445DF
Next are several internal values generated during the OCB-ENCRYPT computation for the last test vector listed above.
L_*        : C6A13B37878F5B826F4F8162A1C8D879
L_$        : 8D42766F0F1EB704DE9F02C54391B075
L_0        : 1A84ECDE1E3DE09BD3E058A8723606D
L_1        : 3509D9BC3C7ADC137AC7C0B150E46C0DA
bottom     : 15 (decimal)
Ktop       : 9862B0FDEE42DD56DBA6433F0125AA2
Stretch    : 9862B0FDEE42DD56DBA6433F0125AA2FAD24D13A063F8B8
Offset_0   : 587EF72716EAB6DD3219F8092D517D69
Offset_1   : 42FA1BF908D78D48F27FD83AA721D04
Offset_2   : 77F3C24534AD04C7F55BF696A434DDDE
Offset_*   : B152F972B3225F459A1477F405FC05A7
Checksum_1 : 000102030405060708090A0B0C0D0E0F
Checksum_2 : 10101010101010101010101010101010
Checksum_* : 30313233343536379010101010101010
The next tuple shows a result with a tag length of 96 bits and a different key.

K: 0F0E0D0C0B0A09080706050403020100
N: BBAA9988776655443322110D
A: 000102030405060708090A0B0C0D0E0F1011121314151617
   18191A1B1C1D1E1F2021222324252627
P: 000102030405060708090A0B0C0D0E0F1011121314151617
   18191A1B1C1D1E1F2021222324252627
C: 1792A4E31E0755FB03E31B22116E6C2DF9EFDF6E33D536F1
   A0124B0A558AE884ED93481529C76B6AD0C515F4D1CDD4FD
   AC4F02AA
The following algorithm tests a wider variety of inputs. Results are given for each parameter set defined in Section 3.1.
K = zeros(KEYLEN-8) || num2str(TAGLEN,8)
C = <empty string>
for i = 0 to 127 do
S = zeros(8i)
N = num2str(3i+1,96)
C = C || OCB-ENCRYPT(K,N,S,S)
N = num2str(3i+2,96)
C = C || OCB-ENCRYPT(K,N,<empty string>,S)
N = num2str(3i+3,96)
C = C || OCB-ENCRYPT(K,N,S,<empty string>)
end for
N = num2str(385,96)
Output : OCB-ENCRYPT(K,N,C,<empty string>)
Iteration i of the loop adds 2i + (3 * TAGLEN / 8) bytes to C, resulting in an ultimate length for C of 22,400 bytes when TAGLEN == 128, 20,864 bytes when TAGLEN == 96, and 19,328 bytes when TAGLEN == 64. The final OCB-ENCRYPT has an empty plaintext component, so serves only to authenticate C. The output should be:
AEAD_AES_128_OCB_TAGLEN128 Output: 67E944D23256C5E0B6C61FA22FDF1EA2
AEAD_AES_192_OCB_TAGLEN128 Output: F673F2C3E7174AAE7B9AE96CA9F29E17
AEAD_AES_256_OCB_TAGLEN128 Output: D90E8E9C77C88B79D79D7FFA161C
AEAD_AES_128_OCB_TAGLEN96 Output: 77A3D8E73589158B25D01209
AEAD_AES_192_OCB_TAGLEN96 Output: 05D56EAD2752C86BE6932C5E
AEAD_AES_256_OCB_TAGLEN96 Output: 5458359AC23B0CBA9E6330DD
AEAD_AES_128_OCB_TAGLEN64 Output: 192C9B7BD90BA06A
AEAD_AES_192_OCB_TAGLEN64 Output: 0066BC6E0EF34E24
AEAD_AES_256_OCB_TAGLEN64 Output: 7D4EA5D45501CBE
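The byte counts stated for C above can be verified with a short computation (the function name is ours):

```python
def total_ciphertext_bytes(taglen_bits: int) -> int:
    """Bytes accumulated in C by the test loop: iteration i contributes
    two i-byte ciphertext cores plus three tags, i.e. 2i + 3*TAGLEN/8."""
    t = taglen_bits // 8
    return sum(2 * i + 3 * t for i in range(128))
```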
Authors’ Addresses
Ted Krovetz
Computer Science Department
California State University, Sacramento
6000 J Street
Sacramento, CA 95819-6021
USA
EMail: ted@krovetz.net
Phillip Rogaway
Computer Science Department
University of California, Davis
One Shields Avenue
Davis, CA 95616-8562
USA
EMail: rogaway@cs.ucdavis.edu